**************************************************************
* 'STATISTICS AT SQUARE ONE' EXAMPLES & EXERCISES WITH SPSS  *
* 9th Edition of the book is free, available as e-book at:   *
* http://bmj.bmjjournals.com/collections/statsbk/index.shtml *
**************************************************************
* (c) Marta Garcia-Granero (08/2005); biostatistics@terra.es *
* Feel free to use this code for teaching, but acknowledge   *
* both the authors of the book & of this code                *
**************************************************************
* You can also buy a printed copy (10th edition, instead of the 9th) of the book at:
  http://www.bmjbookshop.com/shop/product_display.asp?&SiteLanguage=ENG&productid=0727915525 *.
* This syntax has been designed as a companion to the book; don't run it without reading the e-book (or written edition) at the same time *.
* Use the SPSS 'Command Syntax Reference' to learn more about SPSS commands & procedures (HELP -> COMMAND SYNTAX REFERENCE) *.

*************
* CHAPTER 1 *
*************
* http://bmj.bmjjournals.com/collections/statsbk/1.shtml

* Table 1.2 Stem & Leaf Plot *.
* You can use Data & Variable View to define a variable named "lead" and
  type in the data manually, or you can use DATA LIST & VARIABLE LABEL.
  There are two ways of using DATA LIST (LIST or FREE): *.
DATA LIST LIST/lead(F8.1).
BEGIN DATA
0.6
2.6
0.1
1.1
0.4
2.0
0.8
1.3
1.2
1.5
3.2
1.7
1.9
1.9
2.2
END DATA.
* This is not practical for long datasets *.
* This syntax will achieve the same task, with only 4 lines *.
DATA LIST FREE/lead(F8.1).
BEGIN DATA
0.6 2.6 0.1 1.1 0.4 2.0 0.8 1.3 1.2 1.5 3.2 1.7 1.9 1.9 2.2
END DATA.
* Once the variable is created, we can label it *.
VARIABLE LABEL lead 'Urinary conc. of lead (µmol/24 h)'.
* A shortcut is *.
VAR LAB lead 'Urinary conc. of lead (µmol/24 h)'.
* Now, the stem & leaf plot *.
EXAMINE VARIABLES=lead /PLOT STEMLEAF /STATISTICS NONE.
* For odd sample sizes, the median is the value at position (n+1)/2 *.
* You can sort the data & find the one at position 8 (n=15 for this sample) *.
SORT CASES BY lead(A).
LIST.
* Value #8: 1.5 --> Median value *.
* Or you can use FREQUENCIES *.
FREQUENCIES VARIABLES=lead /FORMAT=NOTABLE /STATISTICS=MEDIAN.
* This is a shortcut for the previous command *.
FRE VAR=lead/FOR=NOT/STA=MED.
* Usually, the first 3 letters of a command are enough (but the syntax is harder to understand) *.

* Table 1.3 Median of a sample (n=16) *.
DATA LIST FREE/lead(F8.1).
BEGIN DATA
0.2 0.3 0.6 0.7 0.8 1.5 1.7 1.8 1.9 1.9 2.0 2.0 2.1 2.8 3.1 3.4
END DATA.
VARIABLE LABEL lead 'Lead concentration (µmol/24 h)'.
* For even sample sizes, the median is the average of the values at positions n/2 & n/2+1 *.
* You can sort the data, find the ones at positions 8 & 9 and average them *.
SORT CASES BY lead(A).
LIST.
* Value #8: 1.8; Value #9: 1.9; Average = 1.85 --> Median value *.
* Or, again, you can use FREQUENCIES *.
FREQUENCIES VARIABLES = lead /FORMAT = NOTABLE /STATISTICS = MEDIAN.

* Table 1.2 (again) Measures of variation & quartiles *.
DATA LIST FREE/lead(F8.1).
BEGIN DATA
0.6 2.6 0.1 1.1 0.4 2.0 0.8 1.3 1.2 1.5 3.2 1.7 1.9 1.9 2.2
END DATA.
VARIABLE LABEL lead 'Lead concentration (µmol/24 h)'.
* Median, Q2 and percentile 50 are different names for the same statistic *.
FREQUENCIES VARIABLES=lead /FORMAT=NOTABLE /PERCENTILES = 25 50 75 /STATISTICS=MINIMUM MAXIMUM.
* For Q1 & Q3, SPSS gives different values from the ones reported in the book *.
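* Hedged aside (not from the book): the discrepancy comes from the percentile
  definition being used. EXAMINE can compute percentiles under several definitions
  (HAVERAGE, WAVERAGE, ROUND, EMPIRICAL, AEMPIRICAL); a minimal sketch with the
  default, HAVERAGE - check the 'Command Syntax Reference' for the others *.
EXAMINE VARIABLES=lead /PLOT=NONE /PERCENTILES(25,50,75)=HAVERAGE /STATISTICS NONE /NOTOTAL.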
* The quartiles given in the book are "Tukey's hinges"; they can be computed with SPSS using EXAMINE *.
EXAMINE VARIABLES=lead /PLOT=NONE /PERCENTILES (25,50,75) /STATISTICS NONE /NOTOTAL.
* In shortcut version *.
EXA VAR=lead/PLOT NON/PER(25,50,75)/STA NON/NOT.

* Advanced: Tukey's hinges with MATRIX (a powerful programming tool); take a look at the SPSS syntax guide to learn more about it *.
MATRIX.
* Read unsorted data & store the variable name in "vname" *.
GET nsdata /VAR=lead /NAME=vname /MISSING=OMIT. /* Performs a listwise deletion of missing data to avoid errors *.
* Ordered array: sorting algorithm by R Ristow & J Peck *.
COMPUTE data=nsdata.
COMPUTE data(GRADE(nsdata))=nsdata.
RELEASE nsdata. /* Get rid of the unsorted data *.
* Formulae for Tukey's hinges in odd or even samples *.
* To understand how Q1 & Q3 are computed for different sample sizes, read:
  http://www2.sjsu.edu/faculty/gerstman/StatPrimer/sumstats.pdf.
COMPUTE n=NROW(data).
COMPUTE pair=(TRUNC(n/2) EQ n/2). /* Check if n is odd (0) or even (1) *.
DO IF pair EQ 1. /* Tukey's hinges for even samples *.
- COMPUTE lmedian=data(n/2).
- COMPUTE umedian=data(1+n/2).
- COMPUTE median=(lmedian+umedian)/2. /* Median is the average of two values *.
- DO IF TRUNC(n/4) NE (n/4).
-  COMPUTE q1=data((1+(n/2))/2).
-  COMPUTE q3=data((1+(3*n/2))/2).
- ELSE.
-  COMPUTE lq1=data(n/4).
-  COMPUTE uq1=data(1+n/4).
-  COMPUTE q1=(lq1+uq1)/2.
-  COMPUTE lq3=data(3*n/4).
-  COMPUTE uq3=data(1+3*n/4).
-  COMPUTE q3=(lq3+uq3)/2.
- END IF.
ELSE. /* Tukey's hinges for odd samples *.
- COMPUTE median=data((n+1)/2). /* Median is one value of the sample *.
- DO IF TRUNC((1+n)/4) EQ ((1+n)/4).
-  COMPUTE lq1=data((n+1)/4).
-  COMPUTE uq1=data(1+(n+1)/4).
-  COMPUTE q1=(lq1+uq1)/2.
-  COMPUTE lq3=data(3*(n+1)/4-1).
-  COMPUTE uq3=data(3*(n+1)/4).
-  COMPUTE q3=(lq3+uq3)/2.
- ELSE.
-  COMPUTE q1=data(((1+(n+1)/2))/2).
-  COMPUTE q3=data(((3*(n+1)/2)-1)/2).
- END IF.
END IF.
* Report (makes use of the variable name stored in vname) *.
PRINT T(data) /FORMAT='F4.1' /RNAME=vname /TITLE='Sorted data'.
PRINT n /FORMAT='F8.0' /TITLE='Sample size'.
PRINT {q1,median,q3} /FORMAT='F10.3' /RNAME=vname /CLABEL='Q1(P25)','Median','Q3(P75)' /TITLE="Tukey's Hinges".
END MATRIX.

* Tables 1.2 & 1.3 together for display *.
DATA LIST FREE/ group (F8.0) lead(F8.1).
BEGIN DATA
1 0.6 1 2.6 1 0.1 1 1.1 1 0.4 1 2.0 1 0.8 1 1.3 1 1.2 1 1.5 1 3.2 1 1.7 1 1.9 1 1.9 1 2.2
2 0.2 2 0.3 2 0.6 2 0.7 2 0.8 2 1.5 2 1.7 2 1.8 2 1.9 2 1.9 2 2.0 2 2.0 2 2.1 2 2.8 2 3.1 2 3.4
END DATA.
VARIABLE LABEL lead 'Lead concentration (µmol/24 h)'.
* We can assign a label to each value of the variable "group" *.
VALUE LABEL group 1'Urban children' 2'Rural children'.
* Box-whisker plot (Figure 1.4); see the use of the keyword "BY" *.
EXAMINE VARIABLES=lead BY group /PLOT=BOXPLOT /STATISTICS=NONE /PERCENTILES(50) /NOTOTAL.
* Dot plot with median values (Figure 1.3): I haven't found a direct way of doing it with SPSS *.
* First, we need to add the median of both samples to the dataset *.
AGGREGATE /OUTFILE='C:\Temp\medians.sav' /BREAK=group /lead = MEDIAN(lead).
ADD FILES /FILE=* /FILE='C:\Temp\medians.sav' /IN=source.
VALUE LABEL source 0'Data' 1'Median'.
* To see the effect in the dataset, use EXECUTE *.
EXECUTE . /* Now, go to the Data View and take a look *.
* Instead of the previous steps, you can manually type the 2 medians into the dataset
  and add a binary variable called "source" (0 for data; 1 for medians) *.
* Now, the graph: a bit of editing will be required to make it look OK *.
GRAPH /SCATTERPLOT(BIVAR)=group WITH lead BY source /TITLE='Dot plot of urinary lead concentration for urban & rural children'.

* Table 1.4 Histogram & statistics for grouped data (Figure 1.5) *.
DATA LIST LIST/midpoint(F8.1) count(F8.0).
BEGIN DATA
0.2 2
0.6 7
1.0 10
1.4 16
1.8 23
2.2 28
2.6 19
3.0 16
3.4 11
3.8 7
4.2 1
END DATA.
VALUE LABEL midpoint 0.2 '0.0-0.4' 0.6 '0.4-0.8' 1.0 '0.8-1.2' 1.4 '1.2-1.6' 1.8 '1.6-2.0' 2.2 '2.0-2.4'
 2.6 '2.4-2.8' 3.0 '2.8-3.2' 3.4 '3.2-3.6' 3.8 '3.6-4.0' 4.2 '4.0-4.4'.
VARIABLE LABEL midpoint 'Lead concentration (µmol/24 h)'.
* Instead of a dataset with 2+7+...+7+1 (=140) rows, with SPSS you can use a weighting variable: *.
WEIGHT BY count.
* A bit of editing is required until the histogram looks OK (set the number of intervals to 11, fix the limits, etc.) *.
FREQUENCIES VARIABLES=midpoint /PERCENTILES = 25 50 75 /STATISTICS=STDDEV MEAN /GROUPED= midpoint /HISTOGRAM .
* EXTRA: we inform the program that the data are grouped using "/GROUPED="; the percentiles will be adjusted for grouping *.

* Bar charts for qualitative variables (Figure 1.6) *.
DATA LIST LIST/living survey censusp (3 F8.0).
BEGIN DATA
1 20 50
2 70 30
3 50 20
END DATA.
VALUE LABEL living 1 'Owner occupier' 2 'Council housing' 3 'Private rental'.
* Observed counts have to be converted to percentages (140 = 20+70+50, the survey total) *.
COMPUTE surveyp=100*survey/140.
* Graphing observed (surveyp) vs expected (censusp) percentages *.
GRAPH /BAR(GROUPED)=VALUE(surveyp censusp) BY living .

* Exercise 1.1 Median, range & quartiles for a continuous variable *.
DATA LIST FREE/copper(F8.2).
BEGIN DATA
0.70 0.45 0.72 0.30 1.16 0.69 0.83 0.74 1.24 0.77
0.65 0.76 0.42 0.94 0.36 0.98 0.64 0.90 0.63 0.55
0.78 0.10 0.52 0.42 0.58 0.62 1.12 0.86 0.74 1.04
0.65 0.66 0.81 0.48 0.85 0.75 0.73 0.50 0.34 0.88
END DATA.
VARIABLE LABEL copper 'Urinary copper (µmol/24hr)'.
FREQUENCIES VARIABLES=copper /FORMAT=NOTABLE /STATISTICS=MINIMUM MAXIMUM.
EXAMINE VARIABLES=copper /PLOT BOXPLOT STEMLEAF /PERCENTILES (25,50,75) /STATISTICS NONE /NOTOTAL.
MATRIX.
GET nsdata /VAR=copper /NAME=vname /MISSING=OMIT.
COMPUTE data=nsdata.
COMPUTE data(GRADE(nsdata))=nsdata.
RELEASE nsdata.
COMPUTE n=NROW(data).
COMPUTE pair=(TRUNC(n/2) EQ n/2).
DO IF pair EQ 1.
- COMPUTE lmedian=data(n/2).
- COMPUTE umedian=data(1+n/2).
- COMPUTE median=(lmedian+umedian)/2.
- DO IF TRUNC(n/4) NE (n/4).
-  COMPUTE q1=data((1+(n/2))/2).
-  COMPUTE q3=data((1+(3*n/2))/2).
- ELSE.
-  COMPUTE lq1=data(n/4).
-  COMPUTE uq1=data(1+n/4).
-  COMPUTE q1=(lq1+uq1)/2.
-  COMPUTE lq3=data(3*n/4).
-  COMPUTE uq3=data(1+3*n/4).
-  COMPUTE q3=(lq3+uq3)/2.
- END IF.
ELSE.
- COMPUTE median=data((n+1)/2).
- DO IF TRUNC((1+n)/4) EQ ((1+n)/4).
-  COMPUTE lq1=data((n+1)/4).
-  COMPUTE uq1=data(1+(n+1)/4).
-  COMPUTE q1=(lq1+uq1)/2.
-  COMPUTE lq3=data(3*(n+1)/4-1).
-  COMPUTE uq3=data(3*(n+1)/4).
-  COMPUTE q3=(lq3+uq3)/2.
- ELSE.
-  COMPUTE q1=data(((1+(n+1)/2))/2).
-  COMPUTE q3=data(((3*(n+1)/2)-1)/2).
- END IF.
END IF.
PRINT n /FORMAT='F8.0' /TITLE='Sample size'.
PRINT {q1,median,q3} /FORMAT='F10.3' /RNAME=vname /CLABEL='Q1(P25)','Median','Q3(P75)' /TITLE="Tukey's Hinges".
* Extra: Min & Max are also printed *.
PRINT {CMIN(data),CMAX(data)} /FORMAT='F10.2' /RNAME=vname /CLABEL='Min','Max' /TITLE="Range".
END MATRIX.
* Extra material on the topic:
  http://www.deakin.edu.au/~agoodman/sci101/chap10.php *.

*************
* CHAPTER 2 *
*************
* http://bmj.bmjjournals.com/collections/statsbk/2.shtml

* Figure 2.1 Mean ± 1, 2 & 3 SD *.
DATA LIST FREE/midpoint count (2 F8.0).
BEGIN DATA
52 1 57 3 62 15 67 33 72 58 77 94 82 98
87 89 92 56 97 32 102 14 107 5 112 2
END DATA.
WEIGHT BY count .
VALUE LABEL midpoint 52 '[50-55)' 57 '[55-60)' 62 '[60-65)' 67 '[65-70)' 72 '[70-75)' 77 '[75-80)' 82 '[80-85)'
 87 '[85-90)' 92 '[90-95)' 97 '[95-100)' 102 '[100-105)' 107 '[105-110)' 112 '[110-115]'.
* 'Mean ± k·SD limits' using MATRIX *.
MATRIX.
PRINT /TITLE='Mean ± 1, 2 & 3 SD '.
* Copy data from the active dataset *.
GET counts /VAR=count.
GET xdata /VAR=midpoint /NAMES=vname.
* This computes the sample size (sum of counts) *.
COMPUTE n=CSUM(counts).
* This computes the sample mean *.
COMPUTE mean=CSUM(xdata&*counts)/n.
* Shortcut formula for the Standard Deviation (don't try to understand it yet) *.
COMPUTE sd=SQRT((CSUM(counts&*xdata&**2)-n&*(mean**2))/(n-1)).
* Limits *.
COMPUTE lower1=mean-sd.
COMPUTE upper1=mean+sd.
COMPUTE lower2=mean-2*sd.
COMPUTE upper2=mean+2*sd.
COMPUTE lower3=mean-3*sd.
COMPUTE upper3=mean+3*sd.
* Reports *.
PRINT n /FORMAT='F8.0' /RNAME=vname /TITLE='Sample size for variable:'.
PRINT {mean,sd} /FORMAT='F8.2' /CLABEL='Mean','SD' /RNAME=vname /TITLE='Sample mean & SD for variable:'.
PRINT {lower1,upper1;lower2,upper2;lower3,upper3} /FORMAT='F8.1' /CLABEL='Lower','Upper'
 /RLABEL='Mean±1SD','Mean±2SD','Mean±3SD' /TITLE='Limits'.
END MATRIX.
* Histogram with normal curve *.
FREQUENCIES VARIABLES=midpoint /HISTOGRAM NORMAL.
* Edit the graph & manually add the reference lines at mean ± 1, 2 & 3 SD *.

* Table 2.1 (standard deviation of the data from Table 1.2) *.
DATA LIST FREE/lead(F8.1).
BEGIN DATA
0.6 2.6 0.1 1.1 0.4 2.0 0.8 1.3 1.2 1.5 3.2 1.7 1.9 1.9 2.2
END DATA.
VARIABLE LABEL lead 'Urinary conc. of lead (µmol/24 h)'.
DESCRIPTIVES VARIABLES=lead /STATISTICS= STDDEV VARIANCE .
* Advanced: Manual calculations using MATRIX *.
MATRIX.
PRINT /TITLE='Calculation of Standard Deviation & Variance'.
* The following line reads any variable in the dataset & stores the name *.
GET nsdata /VAR=ALL /NAME=vname /MISSING=OMIT.
* Sorting algorithm by R Ristow & J Peck *.
COMPUTE data=nsdata.
COMPUTE data(GRADE(nsdata))=nsdata.
RELEASE nsdata.
* Statistics *.
COMPUTE n=NROW(data). /* Sample size computed by counting the data directly *.
COMPUTE mean=CSUM(data)/n.
* Definition of variance & standard deviation *.
COMPUTE res=data-mean. /* Differences from the mean *.
COMPUTE variance=CSUM(res&**2)/(n-1).
COMPUTE sd=SQRT(variance).
* Reports *.
COMPUTE tname={vname,'Diffs.','Diffs²','Data²'}.
PRINT {data,res,res&**2,data&**2;
 CSUM(data),CSUM(res),CSUM(res&**2),CSUM(data&**2)}
 /FORMAT='F8.2' /CNAME=tname /TITLE='Table 2.1 Calculation of standard deviation'.
PRINT n /FORMAT='F8.0' /CLABEL='N' /RNAME=vname /TITLE='Sample size'.
PRINT mean /FORMAT='F8.2' /CLABEL='Mean' /RNAME=vname /TITLE='Sample mean'.
PRINT {variance,sd} /FORMAT='F8.4' /CLABEL='Variance','Std. Dev' /RNAME=vname /TITLE='Variance & Standard Deviation'.
END MATRIX.

* Table 2.2 Standard deviation from grouped data *.
DATA LIST LIST/visits counts (2 F8.0).
BEGIN DATA
0 2
1 8
2 27
3 45
4 38
5 15
6 4
7 1
END DATA.
WEIGHT BY counts .
FREQUENCIES VARIABLES=visits /FORMAT=NOTABLE /STATISTICS=STDDEV VARIANCE MEAN MEDIAN SKEWNESS SESKEW.
* The skewness index is different from 0 in skewed distributions; you can also compare mean & median to see if they differ considerably *.
* Advanced: Manual calculations with MATRIX *.
MATRIX.
PRINT /TITLE='Calculation of Standard Deviation & Variance from grouped data'.
GET counts /VAR = counts.
GET xdata /VAR = visits /NAMES = vname.
* Calculations *.
COMPUTE n=CSUM(counts).
COMPUTE mean=CSUM(xdata&*counts)/n.
COMPUTE variance=(CSUM(counts&*xdata&**2)-(CSUM(xdata&*counts)**2)/n)/(n-1).
COMPUTE sd=SQRT(variance).
COMPUTE lower=mean-2*sd.
COMPUTE upper=mean+2*sd.
* This rather complicated loop counts the data below the lower value or above the upper value of the reference range *.
COMPUTE flag=0.
LOOP i=1 TO NROW(xdata).
- COMPUTE flag=flag + counts(i)*((xdata(i) LT lower)+(xdata(i) GT upper)).
END LOOP.
* Reports *.
PRINT n /FORMAT='F8.0' /TITLE='Sample size'.
PRINT mean /FORMAT='F8.2' /TITLE='Sample mean'.
COMPUTE tname={vname,'Counts','C2 x C1','C1²','C2 x C4'}.
PRINT {xdata,counts,xdata&*counts,xdata&**2,counts&*xdata&**2;
 CSUM(xdata),CSUM(counts),CSUM(xdata&*counts),CSUM(xdata&**2),CSUM(counts&*xdata&**2)}
 /FORMAT='F8.0' /CNAME=tname /TITLE='Table 2.2 Calculation of standard deviation from discrete data'.
PRINT {variance,sd} /FORMAT='F8.3' /CLABEL='Variance','Std. Dev' /TITLE='Variance & Standard Deviation'.
PRINT {lower,upper} /FORMAT='F8.2' /CLABEL='Lower','Upper' /TITLE='Reference range'.
PRINT {flag,100*flag/n} /FORMAT='F8.1' /CLABEL='Number','%' /TITLE='Number & Percent of data outside the 95% range'.
END MATRIX.
* The book is wrong (it says 8 subjects, 5.7%, instead of 7, 5%) *.

* Table 2.3 Pain score in 7 patients (mm) *.
DATA LIST FREE/pain (F8.0).
BEGIN DATA
1 1 2 3 3 6 56
END DATA.
* Dot plot (Figure 2.2) *.
COMPUTE logpain = LN(pain) .
SUMMARIZE /TABLES=pain logpain /FORMAT=LIST NOCASENUM TOTAL /TITLE='Original & Log-transformed data' /CELLS=COUNT .
* Dot plots (a bit ugly; they need a lot of editing) *.
COMPUTE x = 1 .
FORMAT x(F8.0).
VARIABLE LABEL x 'Pain scores in 7 patients'.
GRAPH /SCATTERPLOT(BIVAR)=x WITH pain /TITLE='Original data'.
GRAPH /SCATTERPLOT(BIVAR)=x WITH logpain /TITLE='Log-transformed data'.
* Descriptives (the geometric mean is calculated with the MEANS procedure) *.
MEANS TABLES=pain /CELLS MEAN STDDEV MEDIAN GEOMETRIC .
MEANS TABLES=logpain /CELLS MEAN MEDIAN .
* Check with a calculator that exp(median logpain) is median(pain), and that exp(mean logpain) is geometric mean(pain) *.
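* Hedged sketch (not in the book): the same check can be scripted instead of using a
  calculator. Note that AGGREGATE collapses the working dataset (harmless here, since
  the next section builds a new one); 'medpain', 'medlog', 'mlog', 'expmedlg' & 'gmean'
  are scratch names chosen for this example *.
AGGREGATE /OUTFILE=* /BREAK=x /medpain=MEDIAN(pain) /medlog=MEDIAN(logpain) /mlog=MEAN(logpain).
COMPUTE expmedlg=EXP(medlog). /* Should equal medpain *.
COMPUTE gmean=EXP(mlog).      /* The geometric mean of pain *.
FORMAT expmedlg gmean (F8.2).
LIST medpain expmedlg gmean.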
*********
* EXTRA *
*********
* This part belongs to the written edition of the book (10th edition); it doesn't appear in the Web edition (9th edition) *.

* Table 2.4 *.
DATA LIST FREE/therapy outcome count (3 F8.0).
BEGIN DATA
1 1 35 1 2 41
2 1 49 2 2 26
END DATA.
VARIABLE LABEL therapy 'Treatment of PHVD' /outcome 'Outcome in premature babies'.
VALUE LABEL therapy 1'Standard therapy' 2'Drug+standard therapy' /outcome 1'Death/Shunt' 2'No Death/Shunt'.
WEIGHT BY count.
* Although the data are already sorted, to make sure they are OK we'll sort them (the MATRIX code below requires sorted data) *.
SORT CASES BY therapy (A) outcome(A).
MATRIX.
GET data /VAR=count /MISSING=ACCEPT /SYSMISS=0. /* Missing frequencies are replaced by 0 *.
COMPUTE a=data(1).
COMPUTE b=data(2).
COMPUTE c=data(3).
COMPUTE d=data(4).
PRINT {a,b,a+b;c,d,c+d;a+c,b+d,a+b+c+d} /FORMAT='F8.0' /RLABEL='Row 1','Row 2','Total' /CLABEL='Col 1','Col 2','Total' /TITLE='Input data'.
COMPUTE num=data(1:3:2). /* This means: read the data from 1 to 3, in steps of 2 *.
COMPUTE den={CSUM(data(1:2));CSUM(data(3:4))}.
RELEASE data.
COMPUTE p=num/den.
COMPUTE arr=p(1)-p(2).
COMPUTE nn=1/ABS(arr).
DO IF arr LE 0.
- PRINT {arr,nn} /FORMAT='F8.3' /CLABEL='ARR','NNH' /TITLE='Absolute Risk Reduction & Number Needed to Harm'.
ELSE.
- PRINT {arr,nn} /FORMAT='F8.3' /CLABEL='ARR','NNT' /TITLE='Absolute Risk Reduction & Number Needed to Treat'.
END IF.
COMPUTE rr=p(1)/p(2).
COMPUTE rrr=ABS(1-rr).
PRINT {rr,rrr} /FORMAT='F8.2' /CLABEL='RR','RRR' /TITLE='Relative Risk & Relative Risk Reduction'.
END MATRIX.
* RR can be computed with SPSS directly (with the advantage of also getting the 95%CI) *.
* This will also answer exercise 8.5 of the written edition of the book *.
CROSSTABS /TABLES=therapy BY outcome /FORMAT= AVALUE TABLES /STATISTICS= RISK /CELLS= COUNT ROW.

* Table 2.5 *.
DATA LIST FREE/eczema hayfever count (3 F8.0).
BEGIN DATA
1 1 141 1 2 420
2 1 928 2 2 13525
END DATA.
VARIABLE LABEL eczema 'Eczema' /hayfever 'Hay fever'.
VALUE LABEL eczema 1'E. present' 2'E. Absent' /hayfever 1'HF. present' 2'HF. Absent'.
WEIGHT BY count.
* Although the data are already sorted, to make sure they are OK we'll sort them *.
SORT CASES BY eczema (A) hayfever(A).
MATRIX.
GET data /VAR=count /MISSING=ACCEPT /SYSMISS=0.
COMPUTE a=data(1).
COMPUTE b=data(2).
COMPUTE c=data(3).
COMPUTE d=data(4).
PRINT {a,b,a+b;c,d,c+d;a+c,b+d,a+b+c+d} /FORMAT='F8.0' /RLABEL='Row 1','Row 2','Total' /CLABEL='Col 1','Col 2','Total' /TITLE='Input data'.
* Computing OR & RR by columns *.
COMPUTE num=data(1:2).
COMPUTE den={CSUM(data(1:3:2));CSUM(data(2:4:2))}.
COMPUTE p=num/den.
COMPUTE odd=p/(1-p).
COMPUTE oddratio=odd(1)/odd(2).
COMPUTE rr=p(1)/p(2).
PRINT {p,odd} /FORMAT='F8.3' /CLABEL='Risk','Odds' /TITLE='Risks & Odds by columns'.
PRINT {oddratio,rr} /FORMAT='F8.3' /CLABEL='OR','RR' /TITLE='Odds Ratio and Relative Risk'.
* Computing OR & RR by rows *.
COMPUTE num=data(1:3:2).
COMPUTE den={CSUM(data(1:2));CSUM(data(3:4))}.
COMPUTE p=num/den.
COMPUTE odd=p/(1-p).
COMPUTE oddratio=odd(1)/odd(2).
COMPUTE rr=p(1)/p(2).
PRINT {p,odd} /FORMAT='F8.3' /CLABEL='Risk','Odds' /TITLE='Risks & Odds by rows'.
PRINT {oddratio,rr} /FORMAT='F8.3' /CLABEL='OR','RR' /TITLE='Odds Ratio and Relative Risk'.
END MATRIX.
* This will also answer exercise 8.6 of the written edition of the book *.
CROSSTABS /TABLES= hayfever BY eczema /FORMAT= AVALUE TABLES /STATISTICS= RISK /CELLS= COUNT ROW COLUMN.
CROSSTABS /TABLES=eczema BY hayfever /FORMAT= AVALUE TABLES /STATISTICS= RISK /CELLS= COUNT ROW COLUMN.

***********************
* END OF EXTRA TABLES *
***********************

* Exercises 2.1 to 2.3 (2.1 & 2.2 in the written edition of the book) *.
DATA LIST LIST /vaccine count (2 F8.0).
BEGIN DATA
0 12
1 24
2 42
3 38
4 30
5 4
END DATA.
WEIGHT BY count .
VARIABLE LABEL vaccine 'Nr. of times vaccinated'.
FREQUENCIES VARIABLES=vaccine /BARCHART FREQ /STATISTICS=STDDEV MEAN.
* Computing reference ranges *.
MATRIX.
PRINT /TITLE='Reference range'.
GET counts /VAR = count.
GET xdata /VAR = vaccine /NAME=vname.
PRINT {xdata,counts} /FORMAT='F8.0' /CNAME=vname /TITLE='Input data'.
COMPUTE n=CSUM(counts).
COMPUTE mean=CSUM(xdata&*counts)/n.
COMPUTE sum2=CSUM(counts&*xdata&**2).
COMPUTE variance=(sum2-n&*(mean**2))/(n-1).
COMPUTE sd=SQRT(variance).
COMPUTE lower=mean-2*sd.
COMPUTE upper=mean+2*sd.
COMPUTE flag=0.
LOOP i=1 TO NROW(xdata).
- COMPUTE flag=flag+counts(i)*((xdata(i) LT lower)+(xdata(i) GT upper)).
END LOOP.
PRINT n /FORMAT='F8.0' /TITLE='Sample size'.
PRINT {mean,sd} /FORMAT='F8.2' /CLABEL='Mean','SD' /TITLE='Sample mean & SD'.
PRINT {lower,upper} /FORMAT='F8.1' /CLABEL='Lower','Upper' /TITLE='Reference range'.
PRINT {100*flag/n} /FORMAT='F8.1' /TITLE='Percent of data outside the 95% range'.
END MATRIX.

* EXTRA: Exercise 2.3 (only in the written edition of the book) *.
DATA LIST FREE/gender vomit count (3 F8.0).
BEGIN DATA
1 1 61 1 2 161
2 1 37 2 2 204
END DATA.
VARIABLE LABEL gender 'Gender' /vomit 'Nausea/vomiting in recovery room'.
VALUE LABEL gender 1'Female' 2'Male' /vomit 1'Yes' 2'No'.
WEIGHT BY count.
CROSSTABS /TABLES=gender BY vomit /FORMAT= AVALUE TABLES /STATISTICS= RISK /CELLS= COUNT ROW .
SORT CASES BY gender(A) vomit(A).
MATRIX.
PRINT /TITLE='OR & RR'.
GET data /VAR=count /MISSING=ACCEPT /SYSMISS=0.
COMPUTE a=data(1).
COMPUTE b=data(2).
COMPUTE c=data(3).
COMPUTE d=data(4).
RELEASE data.
PRINT {a,b,a+b;c,d,c+d;a+c,b+d,a+b+c+d} /FORMAT='F8.0' /RLABEL='Row 1','Row 2','Total' /CLABEL='Col 1','Col 2','Total' /TITLE='Input data'.
* Shortcut formulae for OR & RR *.
COMPUTE oddratio=(a*d)/(b*c).
COMPUTE rrisk=(a/(a+b))/(c/(c+d)).
PRINT {oddratio,rrisk} /FORMAT='F8.3' /CLABEL='OR','RR' /TITLE='Odds Ratio and Relative Risk'.
END MATRIX.

*************
* CHAPTER 3 *
*************
* http://bmj.bmjjournals.com/collections/statsbk/3.shtml

* Table 3.1 Mean diastolic pressures of printers and farmers *.
DATA LIST LIST/group(A8) n(F8.0) mean(F8.0) sd(F8.1).
BEGIN DATA
Printers 72 88 4.5
Farmers 48 79 4.2
END DATA.
COMPUTE sem = sd/SQRT(n) .
* See the use of the keyword ALL to indicate every variable in the dataset *.
SUMMARIZE /TABLES=ALL /FORMAT=LIST NOCASENUM TOTAL /TITLE='Standard error of the mean' /CELLS=NONE.

* Standard error of a percentage (data in text, no table) *.
DATA LIST LIST/gender (A8) count (F8.0).
BEGIN DATA
Women 73
Men 47
END DATA.
WEIGHT BY count .
* MATRIX can also be used for this task *.
MATRIX.
GET count /VAR=count.
GET groups/VAR=gender.
COMPUTE n=CSUM(count).
COMPUTE p=100*count/n.
COMPUTE sep=SQRT(p&*(100-p)/n).
PRINT count /FORMAT='F8.0' /CLABEL='Counts' /RNAME=groups /TITLE='Input data'.
PRINT {p(1),sep(1)} /FORMAT='F8.2' /CLABEL='Percent','SE(P)' /RNAME=groups /TITLE='Percentage & Standard Error'.
END MATRIX.

* Exercise 3.1 *.
DATA LIST LIST/n(F8.0) mean(F8.2) sd(F8.2).
BEGIN DATA
140 2.18 0.87
END DATA.
COMPUTE sem = sd/SQRT(n) .
FORMAT sem(F8.3).
SUMMARIZE /TABLES=ALL /FORMAT=LIST NOCASENUM TOTAL /TITLE='Standard error of the mean' /CELLS=NONE.

*************
* CHAPTER 4 *
*************
* http://bmj.bmjjournals.com/collections/statsbk/4.shtml

* Reference ranges with data from chapter 1 (Table 1.4) *.
DATA LIST LIST/midpoint(F8.1) count(F8.0).
BEGIN DATA
0.2 2
0.6 7
1.0 10
1.4 16
1.8 23
2.2 28
2.6 19
3.0 16
3.4 11
3.8 7
4.2 1
END DATA.
WEIGHT BY count .
MATRIX.
GET counts /VAR = count.
GET xdata /VAR = midpoint.
COMPUTE n=CSUM(counts).
COMPUTE mean=CSUM(xdata&*counts)/n.
COMPUTE sd=SQRT((CSUM(counts&*xdata&**2)-n&*(mean**2))/(n-1)).
COMPUTE lower=mean-1.96*sd.
COMPUTE upper=mean+1.96*sd.
COMPUTE flag=0.
LOOP i=1 TO NROW(xdata).
- COMPUTE flag=flag+counts(i)*((xdata(i) LT lower)+(xdata(i) GT upper)).
END LOOP.
COMPUTE zvalue=(4.8-mean)/sd. /* Probability of a value GT 4.8 *.
COMPUTE zprob=2*(1-CDFNORM(ABS(zvalue))).
PRINT n /FORMAT='F8.0' /TITLE='Sample size'.
PRINT {mean,sd} /FORMAT='F8.2' /CLABEL='Mean','SD' /TITLE='Statistics'.
PRINT {lower,upper} /FORMAT='F8.2' /CLABEL='Lower','Upper' /TITLE='Reference range'.
PRINT {100*flag/n} /FORMAT='F8.1' /TITLE='Percent of data outside the 95% range'.
PRINT {4.8,zvalue,zprob} /FORMAT='F8.3' /CLABEL='Value','Nr. SD','Prob' /TITLE='Probability of finding an observed value'.
END MATRIX.

* Reference Ranges vs 95% Confidence Intervals (data from Table 3.1) *.
DATA LIST LIST/group(A8) n(F8.0) mean(F8.0) sd(F8.1).
BEGIN DATA
Printers 72 88 4.5
Farmers 48 79 4.2
END DATA.
* For this exercise, the book uses 1.96*sd instead of 2*sd (see text) *.
COMPUTE lower1 = mean - 1.96*sd .
COMPUTE upper1 = mean + 1.96*sd .
COMPUTE lower2 = mean - 1.96*sd/SQRT(n) .
COMPUTE upper2 = mean + 1.96*sd/SQRT(n) .
VARIABLE LABEL lower1 'Lower Ref.Range' /upper1 'Upper Ref.Range'
 /lower2 'Lower 95%CI' /upper2 'Upper 95%CI'.
* See the use of the keyword TO to indicate a list of consecutive variables *.
SUMMARIZE /TABLES=group lower1 TO upper2 /FORMAT=LIST NOCASENUM TOTAL /TITLE='Reference Ranges vs 95% CI' /CELLS=NONE.

* 95% CI for a percentage *.
DATA LIST LIST/gender (A8) count (F8.0).
BEGIN DATA
Women 73
Men 47
END DATA.
WEIGHT BY count .
FREQUENCIES VARIABLES=gender.
MATRIX.
GET count /VAR=count.
GET groups/VAR=gender.
COMPUTE n=CSUM(count).
COMPUTE p=100*count/n.
COMPUTE sep=SQRT(p&*(100-p)/n).
COMPUTE lower=p-1.96*sep.
COMPUTE upper=p+1.96*sep.
PRINT count /FORMAT='F8.0' /CLABEL='Counts' /RNAME=groups /TITLE='Input data'.
PRINT {p,lower,upper} /FORMAT='F8.2' /CLABEL='Percent','Lower95%','Upper95%' /RNAME=groups /TITLE='Percentage & 95% CI'.
END MATRIX.

* Exercises 4.1 & 4.2 *.
DATA LIST LIST/mean(F8.0) sd(F8.1) n(F8.0).
BEGIN DATA
35 11.6 100
END DATA.
COMPUTE lower1 = mean - 1.96*sd .
COMPUTE upper1 = mean + 1.96*sd .
COMPUTE lower2 = mean - 1.96*sd/SQRT(n) .
COMPUTE upper2 = mean + 1.96*sd/SQRT(n) .
VARIABLE LABEL lower1 'Lower Ref.Range' /upper1 'Upper Ref.Range'
 /lower2 'Lower 95%CI' /upper2 'Upper 95%CI'.
SUMMARIZE /TABLES= lower1 TO upper2 /FORMAT=LIST NOCASENUM TOTAL /TITLE='Reference range & 95% CI for malaria parasites' /CELLS=NONE.

*************
* CHAPTER 5 *
*************
* http://bmj.bmjjournals.com/collections/statsbk/5.shtml

* Table 5.1 Mean diastolic blood pressures for printers and farmers *.
DATA LIST LIST/group(A8) n(F8.0) mean(F8.0) sd(F8.1).
BEGIN DATA
Printers 72 88 4.5
Farmers 48 79 4.2
END DATA.
* With aggregated (summary) data, testing can be done with MATRIX *.
MATRIX.
PRINT /TITLE='95%CI for a difference of means & Z tests'.
GET gnames /VAR = group.
GET mean /VAR = mean.
GET sd /VAR = sd.
GET n /VAR = n.
COMPUTE sediff=SQRT(MSUM(sd&**2/n)).
COMPUTE lower=(mean(1)-mean(2))-1.96*sediff.
COMPUTE upper=(mean(1)-mean(2))+1.96*sediff.
COMPUTE zvalue=(mean(1)-mean(2))/sediff.
COMPUTE zsig=1-CDFNORM(ABS(zvalue)).
PRINT {mean,sd,n} /FORMAT='F8.1' /CLABEL='Mean','SD','N' /RNAME=gnames /TITLE='Input data'.
PRINT sediff /FORMAT='F8.2' /TITLE='SE(diff)'.
PRINT {mean(1)-mean(2),lower,upper} /FORMAT='F8.2' /CLABEL='MeanDiff','Lower','Upper' /TITLE='95% CI for difference in means'.
PRINT {zvalue,zsig,2*zsig} /FORMAT='F8.3' /CLABEL='Z-value','One-tailed','2-tailed' /TITLE='Z test and significance for difference in means'.
END MATRIX.

* From text: Erythrocytes in men (one-sample mean Z test) *.
DATA LIST LIST/mu(F8.1) n(F8.0) mean(F8.2) sd(F8.1).
BEGIN DATA
5.5 100 5.35 1.1
END DATA.
COMPUTE sem=sd/SQRT(n).
COMPUTE zvalue=(mean-mu)/sem.
* The concept of two-tailed p-value: *.
*                   ***
*                 **   **
*                *       *
*               *         *
*              *           *
*             *|           |*
*            *||           ||*
*         .**||||           ||||**.
*     *******|*****************|*******
*           -1.36           1.36         *.
COMPUTE pvalue =2*(1-CDF.NORMAL(ABS(zvalue),0,1)).
FORMAT pvalue (F8.3).
SUMMARIZE /TABLES=ALL /FORMAT=LIST NOCASENUM TOTAL /TITLE='One sample mean test (Z)' /CELLS=NONE.
* Advanced: MATRIX can also be used: *.
MATRIX.
PRINT /TITLE='One mean Z test'.
GET mu /VAR = mu.
GET mean /VAR = mean.
GET sd /VAR = sd.
GET n /VAR = n.
COMPUTE zvalue=(mean-mu)/(sd/SQRT(n)).
COMPUTE zsig=1-CDFNORM(ABS(zvalue)).
PRINT n /FORMAT='F8.0' /TITLE='Sample size'.
PRINT {mu,mean,sd/SQRT(n),zvalue,zsig,2*zsig} /FORMAT='F8.3' /CLABEL='PopMean','SampMean','SE(Mean)','Z-value','One-tailed','2-tailed' /TITLE='Z test and significance for difference with Pop. mean'.
END MATRIX.

* Exercises 5.1 & 5.2 *.
DATA LIST LIST/group(A8) n(F8.0) mean(F8.1) sd(F8.1).
BEGIN DATA
Group1 62 12.2 1.8
Group2 35 10.9 2.1
END DATA.
MATRIX.
PRINT /TITLE='95%CI for a difference of means & Z tests'.
GET gnames /VAR = group.
GET mean /VAR = mean.
GET sd /VAR = sd.
GET n /VAR = n.
PRINT {mean,sd,n} /FORMAT='F8.1' /RNAME=gnames /CLABEL='Mean','SD','N' /TITLE='Input data'.
************ Exercise 5.1 (two-sample Z test) ************.
COMPUTE sediff=SQRT(MSUM(sd&**2/n)).
COMPUTE lower=(mean(1)-mean(2))-1.96*sediff.
COMPUTE upper=(mean(1)-mean(2))+1.96*sediff.
COMPUTE zvalue=(mean(1)-mean(2))/sediff.
COMPUTE zsig=1-CDFNORM(ABS(zvalue)).
PRINT sediff /FORMAT='F8.2' /TITLE='SE(diff)'.
PRINT {lower,upper} /FORMAT='F8.2' /CLABEL='Lower','Upper' /TITLE='95% CI for difference in means'.
PRINT {zvalue,zsig,2*zsig} /FORMAT='F8.3' /CLABEL='Z-value','One-tailed','2-tailed' /TITLE='Z test and significance for difference in means'.
************ Exercise 5.2 (one-sample Z test) ************.
COMPUTE mu=14.4. /* We add the value for mu manually *.
COMPUTE zvalue2=(mean(1)-mu)/(sd(1)/SQRT(n(1))).
COMPUTE zsig2=1-CDFNORM(ABS(zvalue2)).
PRINT {mu,mean(1),sd(1)/SQRT(n(1)),zvalue2,zsig2,2*zsig2} /FORMAT='F8.3' /CLABEL='PopMean','SampMean','SEM','Z-value','One-tailed','2-tailed' /TITLE='Z test and significance for difference with Pop. mean'.
END MATRIX.

*************
* CHAPTER 6 *
*************
* http://bmj.bmjjournals.com/collections/statsbk/6.shtml

* Standard error of the difference between proportions, significance test & CI *.
DATA LIST LIST/surgery gender count (3 F8.0).
BEGIN DATA
1 1 73
1 2 47
2 1 363
2 2 277
END DATA.
WEIGHT BY count.
VALUE LABEL surgery 1'Appendicitis' 2'Other' /gender 1'Women' 2'Men'.
CROSSTABS /TABLES=surgery BY gender /FORMAT= AVALUE TABLES /CELLS= COUNT ROW.
MATRIX.
PRINT /TITLE='95%CI for a difference in percentages & Z test'.
GET data /VAR=count.
COMPUTE a=data(1).
COMPUTE b=data(2).
COMPUTE c=data(3).
COMPUTE d=data(4).
PRINT {a,b,a+b;c,d,c+d;a+c,b+d,a+b+c+d} /FORMAT='F8.0' /RLABEL='Row 1','Row 2','Total' /CLABEL='Col 1','Col 2','Total' /TITLE='Input data'.
COMPUTE num=data(1:3:2).
COMPUTE den={CSUM(data(1:2));CSUM(data(3:4))}.
RELEASE data.
COMPUTE perc=100*num/den.
COMPUTE sediff1=SQRT(MSUM(perc&*(100-perc)/den)).
COMPUTE lower=perc(1)-perc(2)-1.96*sediff1.
COMPUTE upper=perc(1)-perc(2)+1.96*sediff1.
COMPUTE commonp=100*MSUM(num)/MSUM(den).
COMPUTE sediff2=SQRT(commonp*(100-commonp)*MSUM(1/den)).
COMPUTE zvalue=(perc(1)-perc(2))/sediff2.
COMPUTE zsig=1-CDFNORM(ABS(zvalue)).
PRINT {T(perc),perc(1)-perc(2)} /FORMAT='F8.2' /CLABEL='P-1','P-2','Diff.' /TITLE='Percentages & Difference'.
PRINT sediff1 /FORMAT='F8.2' /TITLE='SE for difference in percentages'.
PRINT {lower,upper} /FORMAT='F8.2' /CLABEL='Lower','Upper' /TITLE='95% CI for the difference in percentages'.
PRINT sediff2 /FORMAT='F8.2' /TITLE='SE for difference in percentages (assuming H0)'.
PRINT {zvalue,zsig,2*zsig} /FORMAT='F8.3' /CLABEL='Z-value','One-tailed','2-tailed' /TITLE='Z test and significance for difference in proportions'.
END MATRIX.

*********
* EXTRA *
*********
* Table 6.1 of the written edition of the book only *.
DATA LIST LIST/gender surgery count (3 F8.0).
BEGIN DATA
1 1 73
1 2 363
2 1 47
2 2 277
END DATA.
WEIGHT BY count.
VALUE LABEL surgery 1'Appendicitis' 2'Other' /gender 1'Females' 2'Males'.
* This will also answer exercise 8.7 of the written edition of the book *.
CROSSTABS /TABLES=gender BY surgery /FORMAT= AVALUE TABLES /STATISTICS= RISK /CELLS= COUNT .
* Using MATRIX *.
SORT CASES BY gender(A) surgery(A).
MATRIX.
PRINT /TITLE='95% CI for OR & RR'.
GET data /VAR=count.
COMPUTE a=data(1).
COMPUTE b=data(2).
COMPUTE c=data(3).
COMPUTE d=data(4).
RELEASE data.
PRINT {a,b,a+b;c,d,c+d;a+c,b+d,a+b+c+d} /RLABEL='Row 1','Row 2','Total' /CLABEL='Col 1','Col 2','Total' /TITLE='Input data' /FORMAT='F8.0'.
COMPUTE oddr=(a*d)/(b*c).
COMPUTE selog=SQRT(1/a + 1/b + 1/c + 1/d).
COMPUTE lower=EXP(LN(oddr)-1.96*selog).
COMPUTE upper=EXP(LN(oddr)+1.96*selog).
PRINT {oddr,lower,upper} /FORMAT='F8.2' /CLABEL='OR','Low95%CI','Upp95%CI' /TITLE='Odds Ratio and 95%CI'.
* More extra: although not covered by the book, a 95%CI can also be computed for the RR *.
COMPUTE rr=(a/(a+b))/(c/(c+d)).
COMPUTE selogrr=SQRT(1/a - 1/(a+b) + 1/c - 1/(c+d)).
COMPUTE rrlower=EXP(LN(rr)-1.96*selogrr).
COMPUTE rrupper=EXP(LN(rr)+1.96*selogrr).
PRINT {rr,rrlower,rrupper} /FORMAT='F8.2' /CLABEL='RR','Low95%CI','Upp95%CI' /TITLE='Risk Ratio and 95%CI'.
END MATRIX.

**********************
* END OF EXTRA TABLE *
**********************

* Standard error of a total (& of the difference between two totals) *.
* The counts are treated as Poisson variables: the SE of a count n can be estimated as SQRT(n), hence SE(n1-n2)=SQRT(n1+n2) *.
DATA LIST LIST/n1 n2 (2 F8.0).
BEGIN DATA
276 246
END DATA.
COMPUTE zvalue = (n1-n2)/SQRT(n1+n2).
COMPUTE pvalue = 2*(1-CDF.NORMAL(ABS(zvalue),0,1)).
LIST.

* Tables 6.1 & 6.2 Aphthous ulcer in 54 pairs of patients (in the written edition, these are tables 6.2 & 6.3) *.
DATA LIST LIST/pair_a pair_b count (3 F8.0).
BEGIN DATA
1 1 16
1 2 23
2 1 10
2 2 5
END DATA.
WEIGHT BY count.
VALUE LABEL pair_a pair_b 1'Responded' 2'Did not respond'.
CROSSTABS /TABLES=pair_a BY pair_b /FORMAT= AVALUE TABLES /CELLS= COUNT TOTAL.
MATRIX.
PRINT /TITLE='95%CI for a difference in percentages & Z test (paired data)'.
GET data /VAR=count.
COMPUTE a=data(1).
COMPUTE b=data(2).
COMPUTE c=data(3).
COMPUTE d=data(4).
PRINT {a,b,a+b;c,d,c+d;a+c,b+d,a+b+c+d} /RLABEL='Row 1','Row 2','Total' /CLABEL='Col 1','Col 2','Total' /TITLE='Input data' /FORMAT='F8.0'.
COMPUTE n=data(2:3).
COMPUTE totaln=CSUM(data).
RELEASE data.
COMPUTE zvalue=(ABS(n(1)-n(2))-1)/SQRT(n(1)+n(2)).
COMPUTE zsig=1-CDFNORM(ABS(zvalue)).
COMPUTE sediff=(1/totaln)*SQRT(n(1)+n(2)-((n(1)-n(2))**2)/totaln).
COMPUTE perc=n/totaln.
COMPUTE lower=perc(1)-perc(2)-1.96*sediff.
COMPUTE upper=perc(1)-perc(2)+1.96*sediff.
PRINT {perc(1),perc(2),perc(1)-perc(2),lower,upper} /FORMAT='F8.3' /CLABEL='P1','P2','Diff.','Lower','Upper' /TITLE='Percentages, difference & 95% CI for the difference in percentages'.
PRINT sediff /FORMAT='F8.3' /TITLE='SE(diff)'.
PRINT {zvalue,zsig,2*zsig} /FORMAT='F8.3' /CLABEL='Z-value','One-tailed','2-tailed' /TITLE='Z test and significance for difference in proportions'.
END MATRIX.

* Exercise 6.1 (I have changed the sample size to 184 in the 2nd sample; I think it was wrong, because no sample of size 185 can give a percentage of 20.2%, as the text stated) *.
DATA LIST LIST/hospital forceps count (3 F8.0).
BEGIN DATA
1 1 57
1 2 263
2 1 39
2 2 145
END DATA.
WEIGHT BY count.
VALUE LABEL hospital 1'Hospital 1' 2'Hospital 2' /forceps 1'Yes' 2'No'.
* Comment: see that 39/184=20.2% *.
CROSSTABS /TABLES=hospital BY forceps /FORMAT= AVALUE TABLES /CELLS= COUNT ROW.
* Use of RANK to compute the sample sizes (with thanks to Ray Levesque) *.
RANK VARIABLES=forceps(A) BY hospital /N into n /PRINT=NO.
COMPUTE perc=100*count/n.
COMPUTE seperc=SQRT(perc*(100-perc)/n).
SUMMARIZE /TABLES=hospital forceps count n perc seperc /FORMAT=LIST NOCASENUM TOTAL /TITLE='Standard error of percentages' /CELLS=NONE.
MATRIX.
PRINT /TITLE='95%CI for a difference in percentages & Z test (independent samples)'.
GET data /VAR=count.
PRINT {data(1),data(2),CSUM(data(1:2));
 data(3),data(4),CSUM(data(3:4));
 CSUM(data(1:3:2)),CSUM(data(2:4:2)),CSUM(data(1:4))}
 /RLABEL='Row 1','Row 2','Total' /CLABEL='Col 1','Col 2','Total' /TITLE='Input data' /FORMAT='F8.0'.
COMPUTE num=data(1:3:2).
COMPUTE den={CSUM(data(1:2));CSUM(data(3:4))}.
RELEASE data.
COMPUTE perc=100*num/den.
COMPUTE sediff1=SQRT(MSUM(perc&*(100-perc)/den)).
COMPUTE lower=perc(1)-perc(2)-1.96*sediff1.
COMPUTE upper=perc(1)-perc(2)+1.96*sediff1.
COMPUTE commonp=100*MSUM(num)/MSUM(den).
COMPUTE sediff2=SQRT(commonp*(100-commonp)*MSUM(1/den)).
COMPUTE zvalue=(perc(1)-perc(2))/sediff2.
COMPUTE zsig=1-CDFNORM(ABS(zvalue)).
PRINT {T(perc),perc(1)-perc(2)} /FORMAT='F8.2' /CLABEL='P1','P2','Diff.' /TITLE='Percentages & Difference'.
PRINT sediff1 /FORMAT='F8.2' /TITLE='SE for difference in percentages'.
PRINT {lower,upper} /FORMAT='F8.2' /CLABEL='Lower','Upper' /TITLE='95% CI for the difference in percentages'.
PRINT sediff2 /FORMAT='F8.2' /TITLE='SE for difference in percentages (assuming H0)'.
PRINT {zvalue,zsig,2*zsig} /FORMAT='F8.3' /CLABEL='Z-value','One-tailed','2-tailed' /TITLE='Z test and significance for difference in proportions'.
END MATRIX.

* Exercise 6.2 (I count 48 patients, instead of 47 as the e-book says) *.
DATA LIST LIST/old new count (3 F8.0).
BEGIN DATA
1 1 3
1 2 28
2 1 13
2 2 4
END DATA.
WEIGHT BY count.
VALUE LABEL old new 1'Responded' 2'Did not respond'.
CROSSTABS /TABLES=old BY new /FORMAT= AVALUE TABLES /CELLS= COUNT .
MATRIX.
PRINT /TITLE='95%CI for a difference in percentages & Z test (paired data)'.
GET data /VAR=count.
PRINT {data(1),data(2),CSUM(data(1:2));
 data(3),data(4),CSUM(data(3:4));
 CSUM(data(1:3:2)),CSUM(data(2:4:2)),CSUM(data(1:4))}
 /RLABEL='A+','A-','Total' /CLABEL='B+','B-','Total' /TITLE='Input data' /FORMAT='F8.0'.
COMPUTE n=data(2:3).
COMPUTE totaln=CSUM(data).
COMPUTE zvalue=(ABS(n(1)-n(2))-1)/SQRT(n(1)+n(2)).
COMPUTE zsig=1-CDFNORM(ABS(zvalue)).
COMPUTE sediff=(1/totaln)*SQRT(n(1)+n(2)-((n(1)-n(2))**2)/totaln).
COMPUTE perc=n/totaln.
COMPUTE lower=perc(1)-perc(2)-1.96*sediff.
COMPUTE upper=perc(1)-perc(2)+1.96*sediff.
PRINT {perc(1),perc(2),perc(1)-perc(2),lower,upper} /FORMAT='F8.3' /CLABEL='P1','P2','Diff.','Lower','Upper' /TITLE='Percentages, difference & 95% CI for the difference in percentages'.
PRINT sediff /FORMAT='F8.3' /TITLE='SE(diff)'.
PRINT {zvalue,zsig,2*zsig} /FORMAT='F8.3' /CLABEL='Z-value','One-tailed','2-tailed' /TITLE='Z test and significance for difference in proportions'.
END MATRIX.

* EXTRA: Exercise 6.4 (of the written edition of the book); data from Table 2.5 *.
DATA LIST FREE/eczema hayfever count (3 F8.0).
BEGIN DATA
1 1 141 1 2 420
2 1 928 2 2 13525
END DATA.
VARIABLE LABEL eczema 'Eczema' /hayfever 'Hay fever'.
VALUE LABEL eczema 1'E. present' 2'E. Absent' /hayfever 1'HF. present' 2'HF. Absent'.
WEIGHT BY count.
* Using MATRIX (we already used CROSSTABS for this in chapter 2) *.
SORT CASES BY eczema(A) hayfever(A).
MATRIX.
GET data /VAR=count.
COMPUTE a=data(1).
COMPUTE b=data(2).
COMPUTE c=data(3).
COMPUTE d=data(4).
RELEASE data.
PRINT {a,b,a+b;c,d,c+d;a+c,b+d,a+b+c+d} /RLABEL='Row 1','Row 2','Total' /CLABEL='Col 1','Col 2','Total' /TITLE='Input data' /FORMAT='F8.0'.
COMPUTE oddr=(a*d)/(b*c).
COMPUTE selog=SQRT(1/a + 1/b + 1/c + 1/d).
COMPUTE lower=EXP(LN(oddr)-1.96*selog).
COMPUTE upper=EXP(LN(oddr)+1.96*selog).
PRINT {oddr,lower,upper} /FORMAT='F8.2' /CLABEL='OR','Low95%CI','Upp95%CI' /TITLE='Odds Ratio and 95%CI'.
END MATRIX.
*************
* CHAPTER 7 *
*************
* http://bmj.bmjjournals.com/collections/statsbk/7.shtml

* 95%CI for a mean in small samples, with summary data *.
DATA LIST LIST/mean sd n (3 F8.0).
BEGIN DATA
115 12 18
END DATA.
* With raw data you could use EXAMINE, but the book gives only summary data *.
COMPUTE sem=sd/SQRT(n).
* To compute T0.05 (2-tailed) --> 1-0.05/2 = 0.975 *.
COMPUTE t95 = IDF.T(0.975,n-1) .
COMPUTE lower95= mean-t95*sem.
COMPUTE upper95=mean+t95*sem.
* To compute T0.01 (2-tailed) --> 1-0.01/2 = 0.995 *.
COMPUTE t99 = IDF.T(0.995,n-1) .
COMPUTE lower99= mean-t99*sem.
COMPUTE upper99=mean+t99*sem.
VARIABLE LABEL lower95 'Lower 95%CI' /upper95 'Upper 95%CI'
 /lower99 'Lower 99%CI' /upper99 'Upper 99%CI'.
SUMMARIZE /TABLES=mean t95 TO upper99 /FORMAT=LIST NOCASENUM TOTAL /TITLE='95% & 99%CI for a mean in small samples' /CELLS=NONE.
* Check Student's t critical points against statistical tables *.
* Extra (with raw, simulated data): suppose the original data were: *.
DATA LIST FREE/ bsodium(F8.0).
BEGIN DATA
97 100 102 103 105 110 111 112 113 113 115 116 116 125 130 131 134 137
END DATA.
EXAMINE VARIABLES=bsodium /PLOT NONE /STATISTICS DESCRIPTIVES /CINTERVAL 95 /NOTOTAL.
EXAMINE VARIABLES=bsodium /PLOT NONE /STATISTICS DESCRIPTIVES /CINTERVAL 99 /NOTOTAL.

* One-sample t test *.
DATA LIST LIST/mu mean sd (3 F8.1) n (F8.0).
BEGIN DATA
2.5 3.2 1.1 18
END DATA.
* With raw data you could use T-TEST, but again the book gives summary data *.
COMPUTE sem=sd/SQRT(n).
COMPUTE ttest=(mean-mu)/sem.
COMPUTE pvalue =2*(1- CDF.T(ABS(ttest),n-1)).
VARIABLE LABEL sd 'Std.Dev.' /sem 'Std. Error Mean' /ttest 't value'
 /pvalue 'Sig(2-tailed)'.
FORMAT mean sd sem pvalue ttest (F8.3).
SUMMARIZE /TABLES=ALL /FORMAT=LIST NOCASENUM TOTAL /TITLE='One sample t test for summary data' /CELLS=NONE.
* Extra (with raw, simulated data): suppose the original data were: *.
DATA LIST FREE/pcalcium(F8.2).
BEGIN DATA
1.68 1.75 1.79 2.04 2.44 2.71 2.76 3.21 3.30
3.32 3.33 3.34 3.39 3.48 3.66 4.77 5.23 5.40
END DATA.
VARIABLE LABEL pcalcium "Plasma calcium conc. (mmol/l) in patients with Everley's syndrome".
T-TEST TESTVAL = 2.5 /VARIABLES = pcalcium /CRITERIA = CI(.95).
* Compare the results (they are the same, but T-TEST also gives the 95%CI for the mean difference) *.

* Table 7.1 Transit times: unpaired comparison *.
DATA LIST FREE/treatmnt trantime (2 F8.0).
BEGIN DATA
1 44 1 51 1 52 1 55 1 60 1 62 1 66 1 68 1 69 1 71 1 71 1 76 1 82 1 91 1 108
2 52 2 64 2 68 2 74 2 79 2 83 2 84 2 88 2 95 2 97 2 101 2 116
END DATA.
VARIABLE LABEL treatmnt 'Treatments' /trantime 'Transit times (h)'.
VALUE LABEL treatmnt 1'Treatment A' 2'Treatment B'.
* Some descriptives *.
MEANS TABLES=trantime BY treatmnt /CELLS SUM MEAN STDDEV SEMEAN COUNT .
* Here we have raw data, so T-TEST can be used *.
T-TEST GROUPS = treatmnt(1 2) /VARIABLES = trantime /CRITERIA = CI(.95) .
* Topic not covered by the book: homogeneity of variances (HOV) can be tested with different tests.
  SPSS uses the Levene test; another way is an F ratio test (see the MATRIX code below) *.
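* Hedged aside (not from the book): Levene's test can also be requested on its own;
  ONEWAY reports it with the HOMOGENEITY keyword *.
ONEWAY trantime BY treatmnt /STATISTICS=HOMOGENEITY.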
* Advanced: We can aggregate the data and use MATRIX to compute the statistics *.
AGGREGATE /OUTFILE='C:\Temp\agg.sav' /BREAK=treatmnt /mean = MEAN(trantime) /sd = SD(trantime) /N = N.
* We have aggregated the data to an external file (leaving the dataset untouched) *.
MATRIX.
PRINT /TITLE='Two samples T-test'.
* The data are read from the external file *.
GET mean /VAR = mean /FILE='c:\Temp\agg.sav'.
GET sd /VAR = sd /FILE='c:\Temp\agg.sav'.
GET n /VAR = n /FILE='c:\Temp\agg.sav'.
* F-ratio (Hartley) test *.
COMPUTE ftest=(CMAX(sd)/CMIN(sd))&**2.
DO IF sd(1) GE sd(2).
- COMPUTE fsig=(1-FCDF(ftest,n(1)-1,n(2)-1)).
ELSE.
- COMPUTE fsig=(1-FCDF(ftest,n(2)-1,n(1)-1)).
END IF.
* SE of the difference & t test (both with equal & unequal variances) *.
COMPUTE s2p=MSUM((n-1)&*(sd&**2))/(MSUM(n)-2).
COMPUTE sediff1=SQRT(s2p&*(MSUM(1/n))).
COMPUTE sediff2=SQRT(MSUM((sd&**2)&/n)).
COMPUTE tvalue1=(mean(1)-mean(2))/sediff1.
COMPUTE tsig1=2*(1-TCDF(ABS(tvalue1),MSUM(n)-2)).
COMPUTE tvalue2=(mean(1)-mean(2))/sediff2.
COMPUTE df=((MSUM((sd&**2)&/n))**2)/MSUM((((sd&**2)&/n)&**2)&/(n-1)).
COMPUTE tsig2=2*(1-TCDF(ABS(tvalue2),df)).
* Reports *.
PRINT {n,mean,sd,sd/SQRT(n)} /FORMAT='F8.1' /CLABEL='N','Mean','SD','SE(Mean)' /RLABEL='Group A','Group B' /TITLE='Input data'.
PRINT {SQRT(ftest),ftest,fsig} /FORMAT='F8.3' /CLABEL='Ratio','F-test','Sig' /TITLE='Ratio of SD & Significance (F-test)'.
PRINT {tvalue1,sediff1,MSUM(n)-2,tsig1;tvalue2,sediff2,df,tsig2} /FORMAT='F8.3'
 /CLABEL='t-value','SE(diff)','DF','2-tail p' /RLABEL='Equal s²','Uneq. s²'
 /TITLE='t test and significance for difference in means'.
END MATRIX.
* Dot plot *.
AGGREGATE /OUTFILE='C:\Temp\means.sav' /BREAK=treatmnt /trantime = MEAN(trantime).
ADD FILES /FILE=* /FILE='C:\Temp\means.sav' /IN=source.
VALUE LABEL source 0'Data' 1'Mean'.
GRAPH /SCATTERPLOT(BIVAR)=treatmnt WITH trantime BY source.

* Table 7.2 Transit times: paired comparison *.
DATA LIST LIST/time_a time_b (2 F8.0).
BEGIN DATA
63 55
54 62
79 108
68 77
87 83
84 78
92 79
57 94
66 69
53 66
76 72
63 77
END DATA.
VARIABLE LABEL time_a 'Transit times with A (h)' /time_b 'Transit times with B (h)'.
COMPUTE diffa_b = time_a-time_b .
FORMAT diffa_b (F8.0).
SUMMARIZE /TABLES=time_a time_b diffa_b /FORMAT=LIST NOCASENUM TOTAL /TITLE='Transit times & their differences' /CELLS=SUM MEAN STDDEV SEMEAN COUNT .
* Again, we have raw data *.
T-TEST PAIRS = time_a WITH time_b /CRITERIA = CI(.95).
* Advanced: to run the test manually, we must first aggregate the data: *.
COMPUTE x=1.
AGGREGATE /OUTFILE=* /BREAK=x /mean = MEAN(diffa_b) /sd = SD(diffa_b) /N=N.
* Statistics & test *.
COMPUTE sem=sd/SQRT(n).
COMPUTE tvalue = IDF.T(0.975,n-1) .
COMPUTE lower= mean-tvalue*sem.
COMPUTE upper=mean+tvalue*sem.
COMPUTE ttest=(mean-0)/sem.
COMPUTE df=n-1.
COMPUTE pvalue =2*(1- CDF.T(ABS(ttest),n-1)).
* Format output & report *.
VARIABLE LABEL sd 'Std.Dev.' /sem 'Std. Error Mean' /ttest 't value'
 /lower 'Lower 95%CI' /upper 'Upper 95%CI' /pvalue 'Sig(2-tailed)'.
FORMAT df(F8.0) /mean sd sem lower upper pvalue ttest (F8.3).
SUMMARIZE /TABLES=mean sd sem lower TO pvalue /FORMAT=LIST NOCASENUM TOTAL /TITLE='Paired samples t test' /CELLS=NONE.

* Exercise 7.1 *.
DATA LIST LIST/mean (F8.0) sd (F8.1) n (F8.0).
BEGIN DATA
39 3.4 22
END DATA.
COMPUTE sem=sd/SQRT(n).
COMPUTE tvalue = IDF.T(0.975,n-1) .
COMPUTE lower= mean-tvalue*sem.
COMPUTE upper=mean+tvalue*sem.
VARIABLE LABEL lower 'Lower 95%CI' /upper 'Upper 95%CI'.
SUMMARIZE /TABLES=ALL /FORMAT=LIST NOCASENUM TOTAL /TITLE='95%CI for a mean in small samples' /CELLS=NONE.

* Exercise 7.2 *.
DATA LIST LIST/mu mean sd (3 F8.1) n (F8.0).
BEGIN DATA
1.2 1.7 0.8 18
END DATA.
COMPUTE sem=sd/SQRT(n).
COMPUTE ttest=(mean-mu)/sem.
COMPUTE pvalue =2*(1- CDF.T(ABS(ttest),n-1)).
* Format output & report *.
VARIABLE LABEL sd 'Std.Dev.' /sem 'Std. Error Mean' /ttest 't value'
 /pvalue 'Sig(2-tailed)'.
FORMAT sem ttest pvalue (F8.3).
SUMMARIZE /TABLES=ALL /FORMAT=LIST NOCASENUM TOTAL /TITLE='One sample t test for summary data' /CELLS=NONE.

* Exercise 7.3 *.
DATA LIST FREE/ward (F8.0) haemog (F8.1).
BEGIN DATA
1 12.2 1 11.1 1 14.0 1 11.3 1 10.8 1 12.5 1 12.2 1 11.9 1 13.6 1 12.7 1 13.4 1 13.7
2 11.9 2 10.7 2 12.3 2 13.9 2 11.1 2 11.2 2 13.3 2 11.4 2 12.0 2 11.1
END DATA.
VALUE LABEL ward 1'Ward A' 2'Ward B'.
VARIABLE LABEL haemog 'Haemoglobin (g/dl)'.
T-TEST GROUPS = ward(1 2) /VARIABLES = haemog /CRITERIA = CI(.95) .

* Exercise 7.4 *.
DATA LIST LIST/standard new (2 F8.0).
BEGIN DATA
35 27
104 52
27 46
53 33
72 37
64 82
97 51
121 92
86 68
41 62
END DATA.
VARIABLE LABEL standard 'Treatment times with Standard treatment' /new 'Treatment times with New treatment'.
T-TEST PAIRS = standard WITH new /CRITERIA = CI(.95).

*************
* CHAPTER 8 *
*************
* http://bmj.bmjjournals.com/collections/statsbk/8.shtml

* Tables 8.1 & 8.2 Distribution of socioeconomic class of patients in samples A & B *.
DATA LIST LIST/socioec unit count (3 F8.0).
BEGIN DATA
1 1 17
1 2 5
2 1 25
2 2 21
3 1 39
3 2 34
4 1 42
4 2 49
5 1 32
5 2 25
END DATA.
WEIGHT BY count.
VARIABLE LABEL socioec 'Socioeconomic class' /unit 'Units at hospital'.
VALUE LABEL socioec 1'I' 2'II' 3'III' 4'IV' 5'V' /unit 1'Self poisoning' 2'Gastroenterological'.
CROSSTABS /TABLES=socioec BY unit /FORMAT= AVALUE TABLES /STATISTIC=CHISQ /CELLS= COUNT EXPECTED ROW.
* Advanced: Calculation of the Expected (E) numbers *.
WEIGHT OFF.
* First: rearrange the dataset automatically *.
PRESERVE.
SET ERRORS=NONE RESULTS=NONE. /* Stop generating output *.
CASESTOVARS /ID = socioec /GROUPBY = VARIABLE /DROP=unit .
WEIGHT BY count.1. /* Computing Na *.
RANK VARIABLES=socioec(A) /N INTO suma /PRINT=NO.
WEIGHT BY count.2. /* Computing Nb *.
RANK VARIABLES=socioec(A) /N INTO sumb /PRINT=NO.
WEIGHT OFF.
COMPUTE sumsc=count.1+count.2.
WEIGHT BY sumsc.
RANK VARIABLES=socioec(A) /N INTO totaln /PRINT=NO.
WEIGHT OFF.
* Now, take a look at the dataset *.
RESTORE. /* Resume generating output *.
COMPUTE expa=suma*sumsc/totaln.
COMPUTE expb=sumb*sumsc/totaln.
COMPUTE resa=count.1-expa.
COMPUTE resb=count.2-expb.
COMPUTE chi2a=resa**2/expa.
COMPUTE chi2b=resb**2/expb.
FORMAT chi2a chi2b(F8.3).
SUMMARIZE /TABLES=expa TO chi2b /FORMAT=LIST NOCASENUM TOTAL /TITLE='Calculation of the X²' /CELLS=SUM .

* Tables 8.3 & 8.4 Fourfold tables *.
DATA LIST LIST/wives breastfe count (3 F8.0).
BEGIN DATA
1 1 36
1 2 14
2 1 30
2 2 25
END DATA.
WEIGHT BY count.
VALUE LABEL wives 1'Printers' 2'Farmers' /breastfe 1'<3 months' 2'>=3 months'.
CROSSTABS /TABLES=wives BY breastfe /FORMAT= AVALUE TABLES /STATISTIC=CHISQ /CELLS= COUNT ROW EXPECTED.
* Advanced: Chi-square tests & 95% CI for the difference in percentages using MATRIX *.
MATRIX.
PRINT /TITLE='FOURFOLD TABLES (Chi² tests & 95%CI for difference in percentages)'.
GET count /VAR=count.
COMPUTE a=count(1).
COMPUTE b=count(2).
COMPUTE c=count(3).
COMPUTE d=count(4).
PRINT {a,b,a+b;c,d,c+d;a+c,b+d,a+b+c+d} /TITLE='Input data' /FORMAT='F8.0'.
COMPUTE chi2=(a+b+c+d)*(a*d-b*c)&**2/((a+b)*(c+d)*(a+c)*(b+d)).
COMPUTE chi2sig=1-CHICDF(chi2,1).
COMPUTE chi2c=(a+b+c+d)*(ABS(a*d-b*c)-0.5*(a+b+c+d))&**2/((a+b)*(c+d)*(a+c)*(b+d)).
COMPUTE chi2sigc=1-CHICDF(chi2c,1).
PRINT {chi2,chi2sig;chi2c,chi2sigc} /FORMAT='F8.3' /CLABEL='Chi²','Sig.' /RLABEL='Uncorr.','Yates' /TITLE='Chi-square tests'.
COMPUTE num={b;d}.
COMPUTE den={a+b;c+d}.
COMPUTE perc=num/den.
COMPUTE sediff=SQRT(MSUM(perc&*(1-perc)/den)).
PRINT sediff /FORMAT='F8.2' /TITLE='SE for difference in percentages'.
COMPUTE lower=perc(1)-perc(2)-1.96*sediff.
COMPUTE upper=perc(1)-perc(2)+1.96*sediff.
PRINT {perc(1),perc(2),perc(1)-perc(2),lower,upper} /FORMAT='F8.3' /CLABEL='P1','P2','Diff','Lower','Upper' /TITLE='95% CI for the difference in percentages'.
END MATRIX.

* Tables 8.5 & 8.6 Comparing proportions & splitting of X² *.
DATA LIST LIST/vaccine influenza count (3 F8.0).
BEGIN DATA
1 1 43
1 2 237
2 1 52
2 2 198
3 1 25
3 2 245
4 1 48
4 2 212
5 1 57
5 2 233
END DATA.
WEIGHT BY count.
VARIABLE LABEL vaccine 'Type of vaccine' /influenza 'Influenza'.
VALUE LABEL vaccine 1'I' 2'II' 3'III' 4'IV' 5'V' /influenza 1'Got I.' 2'Avoided I.'.
CROSSTABS /TABLES=vaccine BY influenza /FORMAT= AVALUE TABLES /STATISTIC=CHISQ /CELLS= COUNT EXPECTED ROW.
* Read the 'Command Syntax Reference' to learn more about TEMPORARY *.
TEMPORARY.
SELECT IF vaccine NE 3.
CROSSTABS /TABLES=vaccine BY influenza /FORMAT= AVALUE TABLES /STATISTIC=CHISQ /CELLS= COUNT EXPECTED ROW.

* Table 8.7 X² test for trend *.
DATA LIST LIST/change groups count (3 F8.0).
BEGIN DATA
1 1 100
1 2 78
2 1 175
2 2 173
3 1 42
3 2 59
END DATA.
WEIGHT BY count.
VARIABLE LABEL change 'Change in eating poultry'.
VALUE LABEL change 1'Increase' 2'No change' 3'Decrease' /groups 1'Intervention' 2'Control'.
* Assigning scores *.
TEMPORARY.
RECODE change (1=-1) (2=0) (3=1).
VALUE LABELS change 1'' 2'' 3''.
CROSSTABS /TABLES=groups BY change /FORMAT= AVALUE TABLES /STATISTIC=CHISQ /CELLS= COUNT.
* Check the value of "Linear-by-Linear Association"; the small difference from the value reported in the book may be due to rounding errors *.

* Table 8.8 Observed vs theoretical distribution (Goodness of Fit test) *.
DATA LIST LIST/mice count (2 F8.0).
BEGIN DATA
1 380
2 330
3 74
END DATA.
WEIGHT BY count.
VALUE LABEL mice 1'Entirely white' 2'Small brown patch' 3'Large brown patch'.
NPAR TEST /CHISQUARE=mice /EXPECTED=0.510 0.408 0.082.
* Small differences in the statistic are due to rounding errors in the book *.

* Tables 8.9 & 8.10 McNemar's test *.
DATA LIST LIST/pair_a pair_b count (3 F8.0).
BEGIN DATA
1 1 16
1 2 23
2 1 10
2 2 5
END DATA.
VALUE LABEL pair_a pair_b 1'Responded' 2'Did not respond'.
WEIGHT BY count.
* Continuity-corrected chi-square (if the sample size is big enough) *.
NPAR TEST /MCNEMAR= pair_a WITH pair_b.
* Extra: exact p-value (for almost any sample size); see Chapter 9 of the book to learn about "exact tests" *.
CROSSTABS /TABLES=pair_a BY pair_b /STATISTIC=MCNEMAR.
* SPSS doesn't compute the uncorrected McNemar chi-square test and, if the sample size is low, it computes the exact test instead of the corrected chi-square *.
* Advanced: uncorrected chi-square (more sensitive) with MATRIX *.
MATRIX.
PRINT /TITLE="MC-NEMAR'S CORRECTED & UNCORRECTED CHI-SQUARE (*)".
GET obs /VAR=count .
COMPUTE a=obs(1).
COMPUTE b=obs(2).
COMPUTE c=obs(3).
COMPUTE d=obs(4).
RELEASE obs.
PRINT {a,b,a+b;c,d,c+d;a+c,b+d,a+b+c+d} /FORMAT='F8.0' /RLABEL='A-','A+','Total' /CLABEL='B-','B+','Total' /TITLE='INPUT DATA (Counts)'.
COMPUTE chi2=((b-c)&**2)&/(b+c).
COMPUTE chi2sig=1-CHICDF(chi2,1).
COMPUTE chi2c=((ABS(b-c)-1)&**2)&/(b+c).
COMPUTE chi2sigc=1-CHICDF(chi2c,1).
PRINT {chi2,chi2sig;chi2c,chi2sigc} /FORMAT='F8.4' /CLABEL='Chi^2','Sig.' /RLABEL='Uncorr.','Corr.' /TITLE='Chi-square tests (df=1) & asymptotic significances'.
PRINT /TITLE='(*) Valid if the Nr. of discordant pairs is greater than 10'.
END MATRIX.

* Exercise 8.1 *.
DATA LIST LIST/drug improve count (3 F8.0).
BEGIN DATA
1 1 18
1 2 23
1 3 15
1 4 9
1 5 8
2 1 12
2 2 17
2 3 19
2 4 13
2 5 9
END DATA.
WEIGHT BY count.
VARIABLE LABEL improve 'Improvement'.
VALUE LABEL drug 1'New' 2'Standard' /improve 1'Much improved' 2'Improved' 3'Unchanged' 4'Worse' 5'Much worse'.
CROSSTABS /TABLES=drug BY improve /FORMAT= AVALUE TABLES /STATISTIC=CHISQ /CELLS= COUNT.

* Exercise 8.2 *.
DATA LIST LIST/living pcapitis count (3 F8.0).
BEGIN DATA
1 1 36
1 2 14
2 1 30
2 2 25
END DATA.
WEIGHT BY count.
VALUE LABEL living 1'Nearby housing estate' 2'Elsewhere' /pcapitis 1'Yes' 2'No'.
CROSSTABS /TABLES=living BY pcapitis /FORMAT= AVALUE TABLES /STATISTIC=CHISQ /CELLS= COUNT ROW.
SORT CASES BY living(A) pcapitis(D). /* This sorting step is important (see below) *.
MATRIX.
PRINT /TITLE='95%CI for difference in percentages'.
GET count /VAR=count.
COMPUTE a=count(1).
COMPUTE b=count(2).
COMPUTE c=count(3).
COMPUTE d=count(4).
PRINT {a,b,a+b;c,d,c+d;a+c,b+d,a+b+c+d} /TITLE='Input data' /FORMAT='F8.0'.
COMPUTE num={b;d}. /* After the sorting step, 'num' contains the counts with pcapitis=Yes *.
COMPUTE den={a+b;c+d}.
COMPUTE perc=num/den.
COMPUTE sediff=SQRT(MSUM(perc&*(1-perc)/den)).
PRINT sediff /FORMAT='F8.2' /TITLE='SE for difference in percentages'.
COMPUTE lower=perc(1)-perc(2)-1.96*sediff.
COMPUTE upper=perc(1)-perc(2)+1.96*sediff.
PRINT {perc(1),perc(2),perc(1)-perc(2),lower,upper} /FORMAT='F8.3' /CLABEL='P1','P2','Diff','Lower','Upper' /TITLE='95% CI for the difference in percentages'.
END MATRIX.

* Exercise 8.3 *.
DATA LIST LIST/treatmnt efficacy count (3 F8.0).
BEGIN DATA
1 1 10
1 2 19
2 1 5
2 2 21
END DATA.
WEIGHT BY count.
VALUE LABEL treatmnt 1'Standard' 2'New' /efficacy 1'Failed' 2'Clearance'.
CROSSTABS /TABLES=treatmnt BY efficacy /FORMAT= AVALUE TABLES /STATISTIC=CHISQ /CELLS= COUNT ROW.
MATRIX.
PRINT /TITLE='95%CI for difference in percentages'.
GET count /VAR=count.
COMPUTE a=count(1).
COMPUTE b=count(2).
COMPUTE c=count(3).
COMPUTE d=count(4).
PRINT {a,b,a+b;c,d,c+d;a+c,b+d,a+b+c+d} /TITLE='Input data' /FORMAT='F8.0'.
COMPUTE num={b;d}.
COMPUTE den={a+b;c+d}.
COMPUTE perc=num/den.
COMPUTE sediff=SQRT(MSUM(perc&*(1-perc)/den)).
PRINT sediff /FORMAT='F8.2' /TITLE='SE for difference in percentages'.
COMPUTE lower=perc(1)-perc(2)-1.96*sediff.
COMPUTE upper=perc(1)-perc(2)+1.96*sediff.
PRINT {perc(1),perc(2),perc(1)-perc(2),lower,upper} /FORMAT='F8.3' /CLABEL='P1','P2','Diff','Lower','Upper' /TITLE='95% CI for the difference in percentages'.
END MATRIX.

* Exercise 8.4 *.
DATA LIST LIST/practice referred count (3 F8.0).
BEGIN DATA
1 1 14
1 2 89
2 1 11
2 2 81
3 1 39
3 2 127
4 1 31
4 2 190
END DATA.
WEIGHT BY count.
VARIABLE LABEL practice 'Practices' /referred 'Asthma case referred to hospital'.
VALUE LABEL practice 1'A' 2'B' 3'C' 4'D' /referred 1'Yes' 2'No'.
CROSSTABS /TABLES=practice BY referred /FORMAT= AVALUE TABLES /STATISTIC=CHISQ /CELLS= COUNT EXPECTED.
TEMPORARY.
SELECT IF practice NE 3.
CROSSTABS /TABLES=practice BY referred /FORMAT= AVALUE TABLES /STATISTIC=CHISQ /CELLS= COUNT EXPECTED.

*************
* CHAPTER 9 *
*************
* http://bmj.bmjjournals.com/collections/statsbk/9.shtml

* Tables 9.1 & 9.2 Men injured in parachute training *.
DATA LIST LIST/dzone injured count (3 F8.0).
BEGIN DATA
1 1 5
1 2 10
2 1 2
2 2 38
END DATA.
WEIGHT BY count.
VARIABLE LABEL dzone 'Dropping zone' /injured 'Sprained ankles'.
VALUE LABEL dzone 1'Dropping zone A' 2'Dropping zone B' /injured 1'Yes' 2'No'.
CROSSTABS /TABLES=dzone BY injured /FORMAT= AVALUE TABLES /STATISTIC=CHISQ /CELLS= COUNT EXPECTED.
* The book computes 2-tailed p-values by doubling the 1-tailed value (not the same as SPSS does); WinPepi computes 2-tailed, 1-tailed & double 1-tailed p-values *.
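* Hedged aside: if you have the optional Exact Tests module (mentioned just below),
  adding /METHOD=EXACT to CROSSTABS reports the exact p-values & the point probability used next *.
CROSSTABS /TABLES=dzone BY injured /FORMAT= AVALUE TABLES /STATISTIC=CHISQ /CELLS= COUNT EXPECTED /METHOD=EXACT.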
* Advanced: the mid-p value is not computed by SPSS, but it can be calculated easily *.
* Exact two-tailed p-value: 0.0125610
  Point probability: 0.0115427 (how can it be computed?)
  If you have the Exact Tests module, the point probability is obtained by adding
  "/METHOD=EXACT" to the CROSSTABS procedure *.
* Mid p(2-tailed) = Two-tailed P - (Point P)/2 = 0.0125610 - 0.0115427/2 = 0.0067897 -> 0.0068 *.
SET MXLOOPS=10000. /* To avoid errors in big LOOPS *.
MATRIX.
PRINT /TITLE='Mid-p values'.
* Take the fisherp value from the CROSSTABS output and declare it here *.
COMPUTE fisherp=0.0125610.
* Read cell data *.
GET data/VAR=count.
COMPUTE a=data(1).
COMPUTE b=data(2).
COMPUTE c=data(3).
COMPUTE d=data(4).
RELEASE data.
COMPUTE r1=a+b.
COMPUTE r2=c+d.
COMPUTE s1=a+c.
COMPUTE s2=b+d.
COMPUTE n=a+b+c+d.
PRINT {a,b,r1;c,d,r2;s1,s2,n} /FORMAT='F8.0' /RLABEL='Row A','Row B','Total' /CLABEL='Col A','Col B','Total' /TITLE='INPUT DATA (Counts)'.
COMPUTE vec={r1;r2;s1;s2;n;a;b;c;d}.
* LN(Factorial) routine for the 9 needed values *.
COMPUTE lnfact=MAKE(9,1,0).
LOOP j=1 TO 9.
- DO IF vec(j) GT 1.
-  LOOP i= 2 TO vec(j).
-   COMPUTE lnfact(j)=lnfact(j)+LN(i).
-  END LOOP.
- END IF.
END LOOP.
* See the formula for the point probability in the book: for fixed margins,
  point p = (r1!·r2!·s1!·s2!)/(n!·a!·b!·c!·d!). I have used LN and EXP to avoid
  working with big numbers [remember that LN(a·b)=LN(a)+LN(b), and LN(a/b)=LN(a)-LN(b)] *.
COMPUTE point=EXP(CSUM(lnfact(1:4))-CSUM(lnfact(5:9))).
PRINT point /FORMAT='F8.4' /TITLE='Point probability value for 2x2 contingency tables'.
PRINT {fisherp,fisherp-(point/2);2*fisherp,2*(fisherp-(point/2))} /FORMAT='F8.4' /CLABEL='Fisher-p','Mid-p' /RLABEL='1-tailed','2*1-tail' /TITLE='Fisher & Mid-p values for 2x2 contingency tables'.
END MATRIX.

* Exercise 9.1 (extra: the data are presented non-aggregated - raw data - and sorted randomly) *.
DATA LIST FREE/dpartmnt septich (2 F8.0).
BEGIN DATA
2 2 1 2 1 2 2 2 2 2 1 2 1 1 1 2 1 2 1 1
2 2 2 2 2 2 2 2 2 2 1 2 1 1 2 2 1 2 1 1
1 2 1 2 2 2 1 1 1 2 1 2 1 2 2 1 1 2 2 2
END DATA.
VARIABLE LABEL dpartmnt 'Department' /septich 'Septic hands'.
VALUE LABEL dpartmnt 1'A' 2'B' /septich 1'Yes' 2'No'.
* Here we don't need a 'count' variable & WEIGHT by it *.
* CROSSTABS will run OK on non-aggregated & non-ordered datasets *.
CROSSTABS /TABLES=dpartmnt BY septich /FORMAT= AVALUE TABLES /STATISTIC=CHISQ /CELLS= COUNT ROW.
* But all the extra calculations (95%CI & mid-p) require aggregated data (for MATRIX) *.
* We'll leave the original dataset unchanged by saving the aggregated dataset to an external file (the data will be sorted as part of the aggregating process) *.
AGGREGATE /OUTFILE='c:\temp\agg.sav' /BREAK=dpartmnt septich /count=N.
* We are going to compute 2-tailed, 1-tailed & double 1-tailed p-values *.
SET MXLOOPS=10000. /* To avoid errors in big LOOPS *.
MATRIX.
* Read the data from the external file *.
GET data /VAR=count /FILE='C:\temp\agg.sav'.
* Organize the data for display & calculations *.
COMPUTE a=data(1).
COMPUTE b=data(2).
COMPUTE c=data(3).
COMPUTE d=data(4).
RELEASE data.
COMPUTE r1=a+b.
COMPUTE r2=c+d.
COMPUTE s1=a+c.
COMPUTE s2=b+d.
COMPUTE n=a+b+c+d.
PRINT {a,b,r1;c,d,r2;s1,s2,n} /FORMAT='F8.0' /RLABEL='Row A','Row B','Total' /CLABEL='Col A','Col B','Total' /TITLE='INPUT DATA (Counts)'.
COMPUTE vec={r1;r2;s1;s2;n;a;b;c;d}.
PRINT /TITLE='1.- Fisher & Mid-p values'.
* Take the fisherp (one-tailed & two-tailed) values from the CROSSTABS output and declare them here *.
COMPUTE fisherp2 = 0.3575344.
COMPUTE fisherp1 = 0.2044209.
* LN(Factorial) routine for the 9 values *.
COMPUTE lnfact=MAKE(9,1,0).
LOOP j=1 TO 9.
- DO IF vec(j) GT 1.
- LOOP i=2 TO vec(j).
- COMPUTE lnfact(j)=lnfact(j)+LN(i).
- END LOOP.
- END IF.
END LOOP.
COMPUTE point=EXP(CSUM(lnfact(1:4))-CSUM(lnfact(5:9))).
PRINT point /FORMAT='F8.4' /TITLE='Point probability value for this 2x2 contingency table'.
* Report of 2-tailed, 1-tailed & double 1-tailed p-values (Fisher & mid-p) *.
PRINT {fisherp2,fisherp2-(point/2);fisherp1,fisherp1-(point/2);2*fisherp1,2*(fisherp1-(point/2))}
 /FORMAT='F8.4' /CLABEL='Fisher-p','Mid-p' /RLABEL='2-tailed','1-tailed','2*1-tail'
 /TITLE='Fisher & Mid-p values (2-tailed, 1-tailed & double 1-tailed)'.
PRINT /TITLE='2.- Confidence interval for a difference in percentages (unpaired data)'.
COMPUTE num={a,c}.
COMPUTE den={r1,r2}.
COMPUTE perc=100*num/den.
COMPUTE sediff=SQRT(MSUM(perc&*(100-perc)/den)).
PRINT sediff /FORMAT='F8.2' /TITLE='SE for difference in percentages'.
COMPUTE lower=perc(1)-perc(2)-1.96*sediff.
COMPUTE upper=perc(1)-perc(2)+1.96*sediff.
PRINT {perc(1),perc(2),perc(1)-perc(2),lower,upper} /FORMAT='F8.2'
 /CLABEL='P1(%)','P2(%)','Diff','Lower','Upper' /TITLE='Percentages & 95% CI for their difference'.
END MATRIX.

**************
* CHAPTER 10 *
**************
* http://bmj.bmjjournals.com/collections/statsbk/10.shtml
* Table 10.1 *.
DATA LIST LIST/before after (2 F8.0).
BEGIN DATA
25 18
24 27
28 25
15 20
20 17
23 24
21 24
20 22
20 19
27 19
END DATA.
VARIABLE LABEL before 'Fetal movement before CVS' /after 'Fetal movement after CVS'.
* Using NPAR TEST *.
NPAR TEST /WILCOXON=before WITH after /STATISTICS QUARTILES.
* Advanced: ranking data *.
COMPUTE diffb_a = before-after.
FORMAT diffb_a (F8.0).
VARIABLE LABEL diffb_a 'Differences before and after'.
COMPUTE absdif=ABS(diffb_a).
RANK VARIABLES=absdif(A) /RANK INTO rdiffs /PRINT=NO /TIES=MEAN.
COMPUTE sranks=rdiffs*diffb_a/absdif.
FORMAT rdiffs sranks (F8.1).
SUMMARIZE /TABLES=before after diffb_a rdiffs sranks /FORMAT=LIST NOCASENUM TOTAL
 /TITLE='Original & ranked data' /CELLS=NONE.
* Dot plot (with median, not mean value) *.
COMPUTE const = 1.
VARIABLE LABEL const 'Dot plot with median value'.
AGGREGATE /OUTFILE='C:\Temp\median.sav' /BREAK=const /diffb_a = MEDIAN(diffb_a).
ADD FILES /FILE=* /FILE='C:\Temp\median.sav' /IN=source.
VALUE LABEL source 0'Data' 1'Median'.
GRAPH /SCATTERPLOT(BIVAR)=const WITH diffb_a BY source.
SELECT IF source EQ 0. /* Get rid of the median value *.
* Not included in the book: instead of inspecting the plot, symmetry can be checked more
  objectively with the skewness index *.
DESCRIPTIVES VARIABLES=diffb_a /STATISTICS=SKEWNESS.
* Rule: if ABS(skewness)<2*SE(skewness), then the data are approximately symmetric *.
* You can trick SPSS into giving a 95%CI for the median difference using RATIO *.
* RATIO can't handle negative differences, so a suitable constant is added to make every
  value positive (10 will do in this case) *.
TEMPORARY.
COMPUTE diffb_a = 10 + diffb_a.
RATIO STATISTICS diffb_a WITH const /PRINT = CIN(95) MEDIAN.
* After running RATIO, the data are left unchanged (that's what TEMPORARY is for) *.
* The 95%CI goes from 7 to 17; subtracting 10 gives (-3; 7), with median diff = 0. This
  differs from the result the book gives (-2.5 to 4.0), but the method used here is
  different: it doesn't assume symmetry of the differences *.
* Advanced: computation of the CI for the median difference using the method mentioned in
  the book and described in detail in 'Statistics with Confidence', by D Altman *.
SET MXLOOPS=1000. /* If [n(n+1)/2]>1000, change MXLOOPS accordingly *.
MATRIX.
PRINT /TITLE='CONFIDENCE INTERVAL FOR DIFFERENCES BETWEEN TWO MEDIANS (PAIRED)'.
PRINT /TITLE='Underlying assumption: distribution of differences is symmetrical'.
* Get data: replace the variable names by your own (order will affect the sign) *.
GET data /VAR = before after /NAMES = vname /MISSING = OMIT.
COMPUTE n=NROW(data).
COMPUTE d=data(:,1)-data(:,2).
RELEASE data.
* Compute all averages *.
COMPUTE nt=n*(n+1)/2.
COMPUTE dmean=MAKE(1,nt,0).
COMPUTE counter=1.
LOOP i=1 TO n.
- LOOP j=i TO n.
- COMPUTE dmean(counter)=(d(i)+d(j))/2.
- COMPUTE counter=counter+1.
- END LOOP.
END LOOP.
* Sorting algorithm (R Ristow & J Peck) *.
COMPUTE sdmean=dmean.
COMPUTE sdmean(GRADE(dmean)) = dmean.
RELEASE counter,dmean.
* Compute the median of all averages *.
COMPUTE pair=(TRUNC(nt/2) EQ nt/2). /* Check if nt is odd (0) or even (1) *.
DO IF pair EQ 0. /* Median formula for odd samples *.
- COMPUTE median=sdmean((nt+1)/2).
ELSE. /* Median formula for even samples *.
- COMPUTE median=(sdmean(nt/2)+sdmean(1+nt/2))/2.
END IF.
PRINT median /TITLE='Difference between population medians (A - B) ' /RLABEL='Point' /FORMAT='F8.1'.
* Exact or asymptotic 95% & 99% CI *.
DO IF n LE 50. /* Exact Wilcoxon's critical values (Table 18.6 of Altman's book) *.
- COMPUTE w95={0,0,0,0,0,1,3,4,6,9,11,14,18,22,26,30,35,41,47,53,59,66,74,82,90,
     99,108,117,127,138,148,160,171,183,196,209,222,236,250,265,280,295,
     311,328,344,362,379,397,416,435}.
- COMPUTE w99={0,0,0,0,0,0,0,1,2,4,6,8,10,13,16,20,24,28,33,38,43,49,55,62,69,
     76,84,92,101,110,119,129,139,149,160,172,183,195,208,221,234,248,
     262,277,292,308,323,340,356,374}.
- COMPUTE u95=w95(n).
- COMPUTE u99=w99(n).
- RELEASE w95,w99.
- PRINT /TITLE='Exact confidence intervals calculated (N<=50)'.
ELSE. /* Asymptotic Wilcoxon's critical values *.
- COMPUTE u95=1+TRUNC(nt/2-1.959964*SQRT(n*(n+1)*(2*n+1)/24)).
- COMPUTE u99=1+TRUNC(nt/2-2.575829*SQRT(n*(n+1)*(2*n+1)/24)).
- PRINT /TITLE='Asymptotic confidence intervals calculated (N>50)'.
END IF.
DO IF u95 EQ 0.
- PRINT /TITLE='Warning: sample size is too low (<6) for accurate 95%CI. Discard it.'.
- COMPUTE u95=1. /* Replace 0 by 1 to avoid a computing error *.
END IF.
DO IF u99 EQ 0.
- PRINT /TITLE='Warning: sample size is too low (<8) for accurate 99%CI. Discard it.'.
- COMPUTE u99=1. /* Replace 0 by 1 to avoid a computing error *.
END IF.
COMPUTE low95=sdmean(u95).
COMPUTE high95=sdmean(nt+1-u95).
COMPUTE low99=sdmean(u99).
COMPUTE high99=sdmean(nt+1-u99).
PRINT {low95,high95;low99,high99} /FORMAT='F8.1' /TITLE='CI for difference between medians'
 /RLABEL='95%Level' '99%Level' /CLABEL='Lower' 'Upper'.
END MATRIX.

* Tables 10.2 & 10.3 [there are differences between T10.2 and T10.3; the data shown in
  Table 10.3 are used here, so that the results agree with the ones presented in the book,
  with the data & results in Altman's 'Statistics with Confidence', and with the 10th
  edition of the book] *.
DATA LIST FREE/therapy pglobuli (2 F8.0).
BEGIN DATA
1 38 1 26 1 29 1 41 1 36 1 31 1 32 1 30 1 35 1 33
2 45 2 28 2 27 2 38 2 40 2 42 2 39 2 39 2 34 2 45
END DATA.
* Note: the 34 in the second data line was wrongly replaced by 40 in Table 10.2 *.
VALUE LABEL therapy 1'Standard treatment' 2'New therapy'.
VARIABLE LABEL pglobuli 'Plasma globulin fraction'.
NPAR TESTS /M-W= pglobuli BY therapy(1 2).
* Advanced: ranking data *.
RANK VARIABLES=pglobuli(A) /RANK INTO rglobuli /PRINT=NO /TIES=MEAN.
FORMAT rglobuli (F8.1).
SORT CASES BY rglobuli (A).
SUMMARIZE /TABLES=therapy pglobuli rglobuli /FORMAT=LIST NOCASENUM TOTAL
 /TITLE='Original & ranked data' /CELLS=NONE.
MEANS TABLES=rglobuli BY therapy /CELLS COUNT SUM MEAN.
* Dot plot *.
AGGREGATE /OUTFILE='C:\Temp\medians.sav' /BREAK=therapy /pglobuli = MEDIAN(pglobuli).
ADD FILES /FILE=* /FILE='C:\Temp\medians.sav' /IN=source.
VALUE LABEL source 0'Data' 1'Median'.
GRAPH /SCATTERPLOT(BIVAR)=therapy WITH pglobuli BY source.
SELECT IF source EQ 0. /* Get rid of the extra rows *.
* Advanced: computation of the CI for the median difference using the method mentioned in
  the book and described in detail in 'Statistics with Confidence', by D Altman *.
MATRIX.
PRINT /TITLE='CONFIDENCE INTERVAL FOR DIFFERENCES BETWEEN TWO MEDIANS (UNPAIRED)'.
PRINT /TITLE='Underlying assumption: the distributions are similar in shape'.
* Get unsorted data; replace the variable names by your own (grouping variable goes first) *.
GET nsdata /VAR=therapy pglobuli /NAMES=vname /MISSING = OMIT.
* First of all, sort them by the grouping variable (algorithm by R Ristow & J Peck) *.
COMPUTE data = nsdata.
COMPUTE data(GRADE(nsdata(:,1)),:) = nsdata.
RELEASE nsdata.
* Get group values, count sample sizes & split the data in two groups *.
COMPUTE totaln=NROW(data).
COMPUTE ngrp1=data(1,1).
COMPUTE ngrp2=data(totaln,1).
COMPUTE groupvar=vname(1).
PRINT {ngrp1;ngrp2} /FORMAT='F8.0' /RLABEL='Group A=' 'Group B=' /CNAME=groupvar
 /TITLE='Group values'.
COMPUTE n=CSUM(DESIGN(data(:,1))).
PRINT {T(n)} /FORMAT='F8.0' /RLABEL=' N(a)=' ' N(b)=' /CNAME=groupvar /TITLE='Sample sizes'.
DO IF RMIN(n) GT 1. /* Don't compute the CI if any n(i)=1 *.
COMPUTE group1=data(1:n(1),2).
COMPUTE group2=data((n(1)+1):totaln,2).
* Compute the vector of all differences *.
COMPUTE n1n2=n(1)&*n(2).
COMPUTE diff=MAKE(1,n1n2,0).
COMPUTE counter=1.
LOOP i=1 TO n(1).
- LOOP j=1 TO n(2).
- COMPUTE diff(counter)=group1(i)-group2(j).
- COMPUTE counter=counter+1.
- END LOOP.
END LOOP.
* Sort the differences (R Ristow & J Peck) *.
COMPUTE sdiff = diff.
COMPUTE sdiff(GRADE(diff)) = diff.
RELEASE data,diff,group1,group2. /* Get rid of now useless data *.
* Compute the median of the differences *.
COMPUTE pair=(TRUNC(n1n2/2) EQ n1n2/2). /* Check if n1n2 is odd (0) or even (1) *.
DO IF pair EQ 0. /* Median formula for odd samples *.
- COMPUTE median=sdiff((n1n2+1)/2).
ELSE. /* Median formula for even samples *.
- COMPUTE median=(sdiff(n1n2/2)+sdiff(1+(n1n2/2)))/2.
END IF.
RELEASE pair.
COMPUTE depvarn=vname(2).
PRINT median /TITLE='Difference between population medians (A - B) ' /CNAME=depvarn
 /RLABEL=' Point' /FORMAT='F8.1'.
* Exact or asymptotic 95% & 99% CI *.
DO IF ((n(1) LE 25) AND (n(2) LE 25)). /* Exact critical values (Table 18.5 of Altman's book) *.
- COMPUTE d95={0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0;
     0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0;
     0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0;
     0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0;
     0,0,0,0,3,4,6,7,8,9,10,12,13,14,15,16,18,19,20,21,23,24,25,26,28;
     0,0,0,0,4,6,7,9,11,12,14,15,17,18,20,22,23,25,26,28,30,31,33,34,36;
     0,0,0,0,6,7,9,11,13,15,17,19,21,23,25,27,29,31,33,35,37,39,41,43,45;
     0,0,0,0,7,9,11,14,16,18,20,23,25,27,30,32,35,37,39,42,44,46,49,51,54;
     0,0,0,0,8,11,13,16,18,21,24,27,29,32,35,38,40,43,46,49,51,54,57,60,63;
     0,0,0,0,9,12,15,18,21,24,27,30,34,37,40,43,46,49,53,56,59,62,65,68,72;
     0,0,0,0,10,14,17,20,24,27,31,34,38,41,45,48,52,56,59,63,66,70,74,77,81;
     0,0,0,0,12,15,19,23,27,30,34,38,42,46,50,54,58,62,66,70,74,78,82,86,90;
     0,0,0,0,13,17,21,25,29,34,38,42,46,51,55,60,64,68,73,77,81,86,90,95,99;
     0,0,0,0,14,18,23,27,32,37,41,46,51,56,60,65,70,75,79,84,89,94,99,103,108;
     0,0,0,0,15,20,25,30,35,40,45,50,55,60,65,71,76,81,86,91,97,102,107,112,118;
     0,0,0,0,16,22,27,32,38,43,48,54,60,65,71,76,82,87,93,99,104,110,116,121,127;
     0,0,0,0,18,23,29,35,40,46,52,58,64,70,76,82,88,94,100,106,112,118,124,130,136;
     0,0,0,0,19,25,31,37,43,49,56,62,68,75,81,87,94,100,107,113,120,126,133,139,146;
     0,0,0,0,20,26,33,39,46,53,59,66,73,79,86,93,100,107,114,120,127,134,141,148,155;
     0,0,0,0,21,28,35,42,49,56,63,70,77,84,91,99,106,113,120,128,135,142,150,157,164;
     0,0,0,0,23,30,37,44,51,59,66,74,81,89,97,104,112,120,127,135,143,151,158,166,174;
     0,0,0,0,24,31,39,46,54,62,70,78,86,94,102,110,118,126,134,142,151,159,167,175,183;
     0,0,0,0,25,33,41,49,57,65,74,82,90,99,107,116,124,133,141,150,158,167,176,184,193;
     0,0,0,0,26,34,43,51,60,68,77,86,95,103,112,121,130,139,148,157,166,175,184,193,202;
     0,0,0,0,28,36,45,54,63,72,81,90,99,108,118,127,136,146,155,164,174,183,193,202,212}.
- COMPUTE d99={0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0;
     0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0;
     0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0;
     0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0;
     0,0,0,0,1,2,2,3,4,5,6,7,8,8,9,10,11,12,13,14,15,15,16,17,18;
     0,0,0,0,2,3,4,5,6,7,8,10,11,12,13,14,16,17,18,19,20,22,23,24,25;
     0,0,0,0,2,4,5,7,8,10,11,13,14,16,17,19,20,22,23,25,26,28,30,31,33;
     0,0,0,0,3,5,7,8,10,12,14,16,18,19,21,23,25,27,29,31,33,35,36,38,40;
     0,0,0,0,4,6,8,10,12,14,17,19,21,23,25,28,30,32,34,37,39,41,44,46,48;
     0,0,0,0,5,7,10,12,14,17,19,22,25,27,30,32,35,38,40,43,45,48,51,53,56;
     0,0,0,0,6,8,11,14,17,19,22,25,28,31,34,37,40,43,46,49,52,55,58,61,64;
     0,0,0,0,7,10,13,16,19,22,25,28,32,35,38,42,45,48,52,55,59,62,65,69,72;
     0,0,0,0,8,11,14,18,21,25,28,32,35,39,43,46,50,54,58,61,65,69,73,76,80;
     0,0,0,0,8,12,16,19,23,27,31,35,39,43,47,51,55,59,64,68,72,76,80,84,88;
     0,0,0,0,9,13,17,21,25,30,34,38,43,47,52,56,61,65,70,74,79,83,88,92,97;
     0,0,0,0,10,14,19,23,28,32,37,42,46,51,56,61,66,71,75,80,85,90,95,100,105;
     0,0,0,0,11,16,20,25,30,35,40,45,50,55,61,66,71,76,82,87,92,97,103,108,113;
     0,0,0,0,12,17,22,27,32,38,43,48,54,59,65,71,76,82,88,93,99,105,110,116,122;
     0,0,0,0,13,18,23,29,34,40,46,52,58,64,70,75,82,88,94,100,106,112,118,124,130;
     0,0,0,0,14,19,25,31,37,43,49,55,61,68,74,80,87,93,100,106,113,119,126,132,139;
     0,0,0,0,15,20,26,33,39,45,52,59,65,72,79,85,92,99,106,113,119,126,133,140,147;
     0,0,0,0,15,22,28,35,41,48,55,62,69,76,83,90,97,105,112,119,126,134,141,148,156;
     0,0,0,0,16,23,30,36,44,51,58,65,73,80,88,95,103,110,118,126,133,141,149,156,164;
     0,0,0,0,17,24,31,38,46,53,61,69,76,84,92,100,108,116,124,132,140,148,156,165,173;
     0,0,0,0,18,25,33,40,48,56,64,72,80,88,97,105,113,122,130,139,147,156,164,173,181}.
- COMPUTE u95=d95(n(1),n(2)).
- COMPUTE u99=d99(n(1),n(2)).
- PRINT /TITLE='Exact confidence intervals calculated (N(a) and N(b) <= 25)'.
- DO IF u95 EQ 0.
- PRINT /TITLE='Warning: sample sizes are too low for accurate 95%CI. Discard it.'.
- COMPUTE u95=1. /* Replace 0 by 1 to avoid a computing error *.
- END IF.
- DO IF u99 EQ 0.
- PRINT /TITLE='Warning: sample sizes are too low for accurate 99%CI. Discard it.'.
- COMPUTE u99=1. /* Replace 0 by 1 to avoid a computing error *.
- END IF.
- RELEASE d95,d99.
ELSE. /* Asymptotic critical values *.
- COMPUTE u95=1+TRUNC(n1n2/2-1.959964*SQRT(n1n2*(n(1)+n(2)+1)/12)).
- COMPUTE u99=1+TRUNC(n1n2/2-2.575829*SQRT(n1n2*(n(1)+n(2)+1)/12)).
- PRINT /TITLE='Asymptotic confidence intervals calculated (N(a) and/or N(b) > 25)'.
END IF.
COMPUTE low95=sdiff(u95).
COMPUTE high95=sdiff(n1n2+1-u95).
COMPUTE low99=sdiff(u99).
COMPUTE high99=sdiff(n1n2+1-u99).
PRINT {low95,high95;low99,high99} /FORMAT='F8.1' /TITLE='CI for difference between medians'
 /RLABEL='95%Level' '99%Level' /CLABEL='Lower' 'Upper'.
ELSE.
- PRINT /TITLE='At least one sample size = 1. No CI can be calculated.'.
END IF.
END MATRIX.

* Exercise 10.1 *.
DATA LIST LIST/placebo newtrat (2 F8.0).
BEGIN DATA
4 2
12 6
6 6
3 5
15 9
10 11
2 4
5 6
11 3
4 7
6 0
2 5
END DATA.
NPAR TEST /WILCOXON=placebo WITH newtrat /STATISTICS QUARTILES.
* The answer provided in the book is wrong *.

* Exercise 10.2 (8 patients in the 1st group, not 12 as stated) *.
DATA LIST FREE/group mattacks (2 F8.0).
BEGIN DATA
1 8 1 6 1 0 1 3 1 14 1 5 1 11 1 2
2 7 2 10 2 4 2 11 2 2 2 8 2 8 2 6 2 1 2 5
END DATA.
VALUE LABEL group 1'New preparation' 2'Placebo'.
VARIABLE LABEL mattacks 'Nr. of migraine attacks in 6 months'.
NPAR TESTS /M-W= mattacks BY group(1 2).
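* Extra (not in the book): with samples as small as these, the asymptotic Mann-Whitney
  p-value is only an approximation. If the optional Exact Tests module is installed, an
  exact p-value can be requested with the METHOD subcommand; a sketch, assuming the module
  is available (TIMER limits the computation time, in minutes) *.
NPAR TESTS /M-W= mattacks BY group(1 2) /METHOD=EXACT TIMER(5).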
**************
* CHAPTER 11 *
**************
* http://bmj.bmjjournals.com/collections/statsbk/11.shtml
* Table 11.1 *.
DATA LIST LIST/height deadsp (2 F8.0).
BEGIN DATA
110 44
116 31
124 43
129 45
131 56
138 79
142 57
150 56
153 58
155 92
156 78
159 64
164 88
168 112
174 101
END DATA.
VARIABLE LABEL height 'Height (cm)' /deadsp 'Dead space (ml)'.
GRAPH /SCATTERPLOT(BIVAR)= height WITH deadsp.
CORRELATIONS /VARIABLES= deadsp height /STATISTICS XPROD.
* Advanced: manual calculations (EXTRA: the 95%CI for R is also computed, although the topic
  is not covered in this book; see page 89 of 'Statistics with Confidence' for technical
  details) *.
MATRIX.
PRINT /TITLE='Pearson correlation coefficient & 95%CI'.
GET data /VAR=ALL /NAMES=vname /MISSING=OMIT.
COMPUTE n=NROW(data).
PRINT n /FORMAT='F4.0' /RLABEL='n =' /TITLE='Sample size'.
* Compute the 2 means & variances at the same time *.
COMPUTE mean=CSUM(data)/n.
COMPUTE variance=(CSSQ(data)-n&*(mean&**2))/(n-1).
PRINT {mean;variance} /FORMAT='F8.2' /RLABEL='Mean','Variance' /CNAME=vname /TITLE='Statistics'.
* Extra variables to make the formulae easier to understand *.
COMPUTE x=data(:,1).
COMPUTE y=data(:,2).
COMPUTE covxy=((T(x)*y)-n*mean(1)*mean(2))/(n-1).
COMPUTE r=covxy/SQRT(variance(1)*variance(2)).
PRINT {covxy,r} /FORMAT='F8.3' /CLABEL='COVxy','r' /TITLE='Covariance & Pearson r'.
COMPUTE tvalue=r*SQRT((n-2)/(1-r**2)).
PRINT tvalue /FORMAT='F4.2' /RLABEL='t =' /TITLE='Significance test for r (df=n-2)'.
* Confidence interval (using Fisher's Rz transform) *.
COMPUTE zr=.5*(LN((1+r)/(1-r))).
COMPUTE sezr=1/SQRT(n-3).
COMPUTE lowzr=zr-1.96*sezr.
COMPUTE uppzr=zr+1.96*sezr.
COMPUTE lowr=((EXP(2*lowzr))-1)/((EXP(2*lowzr))+1).
COMPUTE uppr=((EXP(2*uppzr))-1)/((EXP(2*uppzr))+1).
PRINT {lowr,uppr} /FORMAT='F8.2' /CLABEL="Lower" "Upper" /TITLE='95%CI for R:'.
END MATRIX.

* Spearman's coefficient: obtaining Table 11.2 *.
RANK VARIABLES = height deadsp(A) /RANK INTO rx ry /PRINT=NO /TIES=MEAN.
COMPUTE d = ry - rx.
COMPUTE dsquare = d**2.
FORMAT rx ry d (F8.1) dsquare (F8.2).
SUMMARIZE /TABLES=rx ry d dsquare /FORMAT=LIST NOCASENUM TOTAL
 /TITLE='Derivation of Spearman Rank Correlation' /CELLS=SUM.
* SPSS calls Spearman's correlation coefficient "Rho" *.
NONPAR CORR /VARIABLES=deadsp height /PRINT=SPEARMAN TWOTAIL NOSIG.
* Manual calculations (and 95%CI for Rs) *.
MATRIX.
PRINT /TITLE='Spearman correlation coefficient & 95%CI'.
* This code requires the original data to have been ranked into the variables rx & ry *.
GET data /VAR=rx ry /NAMES=vname /MISSING OMIT.
COMPUTE n=NROW(data).
COMPUTE mean=CSUM(data)/n.
COMPUTE variance=(CSSQ(data)-n&*(mean&**2))/(n-1).
COMPUTE x=data(:,1).
COMPUTE y=data(:,2).
COMPUTE covxy=((T(x)*y)-n*mean(1)*mean(2))/(n-1).
COMPUTE rs=covxy/SQRT(variance(1)*variance(2)).
PRINT rs /FORMAT='F4.3' /RLABEL='Rs = ' /TITLE='Spearman Rs'.
COMPUTE tvalue=rs*SQRT((n-2)/(1-rs**2)).
PRINT n /FORMAT='F4.0' /RLABEL='n ='
 /TITLE='Sample size (should be at least 10 for reliable testing & CI)'.
PRINT tvalue /FORMAT='F4.2' /RLABEL='t =' /TITLE='Significance test for Rs (df=n-2)'.
* Confidence interval (using Fisher's Rz transform) *.
COMPUTE zr=.5*(LN((1+rs)/(1-rs))).
COMPUTE sezr=1/SQRT(n-3).
COMPUTE lowzr=zr-1.96*sezr.
COMPUTE uppzr=zr+1.96*sezr.
COMPUTE lowr=((EXP(2*lowzr))-1)/((EXP(2*lowzr))+1).
COMPUTE uppr=((EXP(2*uppzr))-1)/((EXP(2*uppzr))+1).
PRINT {lowr,uppr} /FORMAT='F8.2' /CLABEL="Lower" "Upper" /TITLE='95%CI for Rs:'.
END MATRIX.

* Regression with Table 11.1 *.
REGRESSION /STATISTICS COEFF OUTS CI R /NOORIGIN /DEPENDENT deadsp /METHOD=ENTER height.
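* Extra (not in the book): REGRESSION can save the fitted values and the residuals as new
  variables, which helps to check the assumptions behind the regression (linearity, constant
  scatter). A sketch: PRED & RESID are standard SAVE keywords, and PRE_1 & RES_1 are the
  default names SPSS gives to the saved variables *.
REGRESSION /STATISTICS COEFF OUTS CI R /NOORIGIN /DEPENDENT deadsp /METHOD=ENTER height
 /SAVE PRED RESID.
GRAPH /SCATTERPLOT(BIVAR)=pre_1 WITH res_1. /* Residuals vs fitted values: look for a random scatter *.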
* To plot the regression line, the scatterplot has to be edited and a fit line added manually *.
GRAPH /SCATTERPLOT(BIVAR)=height WITH deadsp.
* Advanced: calculations with MATRIX *.
* First, we get rid of the extra variables we have added to the dataset *.
MATCH FILES /FILE=* /KEEP=deadsp height. /* Every other variable will be dropped *.
EXECUTE.
* Comment: the 1st variable in the dataset is treated as X, the 2nd as Y (reverse the order
  in the "/KEEP" subcommand if you want the other regression equation) *.
MATRIX.
PRINT /TITLE='Simple Linear Regression'.
GET data /VAR=ALL /NAMES=vname /MISSING OMIT.
COMPUTE n=NROW(data).
COMPUTE mean=CSUM(data)/n.
COMPUTE variance=(CSSQ(data)-n&*(mean&**2))/(n-1).
PRINT {mean;variance} /FORMAT='F8.2' /RLABEL='Mean','Variance' /CNAME=vname
 /TITLE='Statistics X Y'.
COMPUTE x=data(:,1).
COMPUTE y=data(:,2).
COMPUTE covxy=((T(x)*y)-n*mean(1)*mean(2))/(n-1).
COMPUTE r=covxy/SQRT(variance(1)*variance(2)).
PRINT {covxy,r**2} /FORMAT='F8.3' /CLABEL='COVxy','R²' /TITLE='Covariance & R-square'.
COMPUTE b=covxy/variance(1).
COMPUTE a=mean(2)-b*mean(1).
PRINT {a,b} /FORMAT='F8.3' /CLABEL='a','b' /TITLE='Regression line'.
COMPUTE sres=SQRT(variance(2)*(1-r**2)*(n-1)/(n-2)).
PRINT sres /FORMAT='F8.3' /RLABEL='SDres = ' /TITLE='Residual Standard Deviation'.
PRINT n /FORMAT='F4.0' /RLABEL='n =' /TITLE='Sample size'.
COMPUTE seb=sres/SQRT((n-1)*variance(1)).
COMPUTE tvalue=b/seb.
PRINT tvalue /FORMAT='F4.2' /RLABEL='t =' /TITLE='Significance test for b (df=n-2)'.
END MATRIX.

* Exercises 11.1 to 11.4 *.
DATA LIST LIST/attndnce (F8.0) distance (F8.1).
BEGIN DATA
21 6.8
12 10.3
30 1.7
8 14.2
10 8.8
26 5.8
42 2.1
31 3.3
21 4.3
15 9.0
19 3.2
6 12.7
18 8.2
12 7.0
23 5.1
34 4.1
END DATA.
VARIABLE LABEL attndnce 'Attendance rate (%)' /distance 'Distance (miles)'.
CORRELATIONS /VARIABLES=attndnce distance /PRINT=TWOTAIL NOSIG.
NONPAR CORR /VARIABLES=attndnce distance /PRINT=SPEARMAN TWOTAIL NOSIG.
REGRESSION /STATISTICS COEFF OUTS CI R /NOORIGIN /DEPENDENT attndnce /METHOD=ENTER distance.

**************
* CHAPTER 12 *
**************
* http://bmj.bmjjournals.com/collections/statsbk/12.shtml
* Table 12.1 Survival in 49 patients with Duke's C cancer *.
DATA LIST FREE/stime status treatmnt (3 F8.0).
BEGIN DATA
3 0 1 6 1 1 6 1 1 6 1 1 6 1 1 8 1 1 8 1 1 12 1 1 12 1 1 12 0 1
15 0 1 16 0 1 18 0 1 18 0 1 20 1 1 22 0 1 24 1 1 28 0 1 28 0 1 28 0 1
30 1 1 30 0 1 33 0 1 42 1 1
1 0 2 5 0 2 6 1 2 6 1 2 9 0 2 10 1 2 10 1 2 10 0 2 12 1 2 12 1 2
12 1 2 12 1 2 12 0 2 13 0 2 15 0 2 16 0 2 20 0 2 24 1 2 24 0 2 27 0 2
32 1 2 34 0 2 36 0 2 36 0 2 44 0 2
END DATA.
VARIABLE LABEL stime 'Survival time (months)' /status 'Censoring status'
 /treatmnt 'Treatment assigned'.
VALUE LABEL status 0'Censored' 1'Death' /treatmnt 1'Control' 2'Gamma linoleic acid'.
KM stime BY treatmnt /STATUS=status(1) /PRINT TABLE MEAN /PLOT SURVIVAL
 /TEST LOGRANK /COMPARE OVERALL POOLED.
* Computing Oa & Ea by hand is tedious, but we don't need to do that: with SPSS, the RR & its
  95%CI can be obtained with Cox regression *.
COXREG stime /STATUS=status(1) /METHOD=ENTER treatmnt /CONTRAST (treatmnt)=INDICATOR(1)
 /PRINT=CI(95).
* This alternate syntax also works *.
COXREG stime WITH treatmnt /STATUS=status(1) /CONTRAST (treatmnt)=INDICATOR(1) /PRINT=CI(95).

* Exercises 12.1 & 12.2 *.
DATA LIST FREE/group time status (3 F8.0).
BEGIN DATA
1 4 0 1 10 0 1 12 1 1 2 0 1 8 0 1 12 1 1 8 2 1 6 0 1 9 0 1 12 1
2 7 2 2 5 0 2 11 0 2 6 0 2 3 0 2 9 0 2 4 0 2 1 0 2 7 0 2 12 1
END DATA.
VALUE LABEL status 0'Stop earlier' 1'End of test' 2'Equipment failure'
 /group 1'Normal weight' 2'Overweight'.
* Comment: cases with status=2 are Lost To Follow Up (LTFU). We can ask SPSS to report them
  separately, although they will be treated as censored (see the EXTRA INFORMATION at the
  end of this file) *.
KM time BY group /STATUS=status EVENT(0) LOST(2) /PRINT TABLE MEAN /PLOT SURVIVAL
 /TEST LOGRANK /COMPARE OVERALL POOLED.
* COXREG accepts "LOST(2)", but ignores it completely *.
COXREG time WITH group /STATUS=status EVENT(0) LOST(2) /CONTRAST (group)=INDICATOR /PRINT=CI(95).

**************
* CHAPTER 13 *
**************
* http://bmj.bmjjournals.com/collections/statsbk/13.shtml
* This chapter has no statistical analyses to run with SPSS, but it is the most important
  one; read it carefully *.

* END OF SYNTAX *.

********************************************************************
EXTRA INFORMATION

The difference between "censored data" and "lost to follow up data":

Although people tend to think that censored data (C) are the same as lost to follow up
data (LTFU), there IS a difference. LTFU data are cases whose status is not known, while
censored data (usually, people withdrawn alive at the end of the study) are cases whose
status was known at the end of the study (alive, with or without disease; a minor
distinction from a statistical point of view, unless you also want to study Time to
Relapse instead of Overall Survival). Loss to follow up can potentially threaten a
study's validity.

(All the ASCII figures are better viewed using Courier)

Consider the following survival data:

|---------------------*          |   Event (Uncensored)
|--------------------------------|   Withdrawn Alive (Censored)
|  ----------------------*       |   Event (Uncensored)
|   -----------------------------|   Withdrawn Alive (Censored)
|-----------?                    |   Lost To Follow Up (Lost)
|  ------------?                 |   Lost To Follow Up (Lost)
|----------*                     |   Event (Uncensored)
----------------------------------
Start                          End

If the LTFU data are related to the status, then our survival estimates can be greatly
biased. If they represent true survivors whose data we don't have for some reason, then
we'll underestimate survival:

|---------------------*          |   Event (Uncensored)
|--------------------------------|   Withdrawn Alive (Censored)
|  ----------------------*       |   Event (Uncensored)
|   -----------------------------|   Withdrawn Alive (Censored)
|-----------?--------------------|   LTFU (but WA if followed)
|  ------------?-----------------|   LTFU (but WA if followed)
|----------*                     |   Event (Uncensored)
----------------------------------
Start                          End

If they correspond to unregistered events, then we'll be overestimating survival:

|---------------------*          |   Event (Uncensored)
|--------------------------------|   Withdrawn Alive (Censored)
|  ----------------------*       |   Event (Uncensored)
|   -----------------------------|   Withdrawn Alive (Censored)
|-----------?---*                |   LTFU (should be an event)
|  ------------?-----*           |   LTFU (should be an event)
|----------*                     |   Event (Uncensored)
----------------------------------
Start                          End

More than 10% of LTFU data can compromise the statistical analysis (sorry, I can't give a
reference for that somewhat arbitrary figure, but it came from a reliable source that has
since left the University). Other people give a rule of thumb of less than 20%.
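A simple sensitivity check (not from the book, just a suggestion): run the survival
analysis twice, once with the LTFU cases treated as censored (the optimistic reading,
which is what KM did above) and once with them recoded as events (the pessimistic
reading). If both runs lead to the same conclusion, the LTFU cases are not driving the
results. A sketch using the variables of Exercise 12.1, where the event code is 0 and
the LTFU code is 2:

TEMPORARY.
RECODE status (2=0). /* Pessimistic reading: count the lost cases as events *.
KM time BY group /STATUS=status EVENT(0) /PRINT TABLE MEAN /TEST LOGRANK /COMPARE OVERALL POOLED.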